00:00:00.001 Started by upstream project "autotest-per-patch" build number 127095 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.128 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.157 Using shallow fetch with depth 1 00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.157 > git --version # timeout=10 00:00:00.180 > git --version # 'git version 2.39.2' 00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.798 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.808 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.819 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.819 > git config core.sparsecheckout # timeout=10 00:00:05.830 > git read-tree -mu HEAD # timeout=10 00:00:05.847 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.883 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.883 > git 
rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.976 [Pipeline] Start of Pipeline 00:00:05.987 [Pipeline] library 00:00:05.988 Loading library shm_lib@master 00:00:05.989 Library shm_lib@master is cached. Copying from home. 00:00:06.001 [Pipeline] node 00:00:06.009 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.011 [Pipeline] { 00:00:06.018 [Pipeline] catchError 00:00:06.019 [Pipeline] { 00:00:06.031 [Pipeline] wrap 00:00:06.040 [Pipeline] { 00:00:06.049 [Pipeline] stage 00:00:06.051 [Pipeline] { (Prologue) 00:00:06.245 [Pipeline] sh 00:00:06.526 + logger -p user.info -t JENKINS-CI 00:00:06.546 [Pipeline] echo 00:00:06.548 Node: GP11 00:00:06.556 [Pipeline] sh 00:00:06.904 [Pipeline] setCustomBuildProperty 00:00:06.913 [Pipeline] echo 00:00:06.915 Cleanup processes 00:00:06.919 [Pipeline] sh 00:00:07.194 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.194 1381203 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.203 [Pipeline] sh 00:00:07.478 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.478 ++ awk '{print $1}' 00:00:07.478 ++ grep -v 'sudo pgrep' 00:00:07.478 + sudo kill -9 00:00:07.478 + true 00:00:07.491 [Pipeline] cleanWs 00:00:07.499 [WS-CLEANUP] Deleting project workspace... 00:00:07.499 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.504 [WS-CLEANUP] done 00:00:07.511 [Pipeline] setCustomBuildProperty 00:00:07.524 [Pipeline] sh 00:00:07.804 + sudo git config --global --replace-all safe.directory '*' 00:00:07.889 [Pipeline] httpRequest 00:00:07.924 [Pipeline] echo 00:00:07.926 Sorcerer 10.211.164.101 is alive 00:00:07.934 [Pipeline] httpRequest 00:00:07.937 HttpMethod: GET 00:00:07.938 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.938 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.961 Response Code: HTTP/1.1 200 OK 00:00:07.971 Success: Status code 200 is in the accepted range: 200,404 00:00:07.979 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:28.585 [Pipeline] sh 00:00:28.866 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:28.882 [Pipeline] httpRequest 00:00:28.902 [Pipeline] echo 00:00:28.904 Sorcerer 10.211.164.101 is alive 00:00:28.913 [Pipeline] httpRequest 00:00:28.917 HttpMethod: GET 00:00:28.918 URL: http://10.211.164.101/packages/spdk_2ce15115b6714ac42f587588e9a92ca127eb973d.tar.gz 00:00:28.918 Sending request to url: http://10.211.164.101/packages/spdk_2ce15115b6714ac42f587588e9a92ca127eb973d.tar.gz 00:00:28.920 Response Code: HTTP/1.1 200 OK 00:00:28.920 Success: Status code 200 is in the accepted range: 200,404 00:00:28.921 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2ce15115b6714ac42f587588e9a92ca127eb973d.tar.gz 00:00:45.736 [Pipeline] sh 00:00:46.019 + tar --no-same-owner -xf spdk_2ce15115b6714ac42f587588e9a92ca127eb973d.tar.gz 00:00:48.579 [Pipeline] sh 00:00:48.860 + git -C spdk log --oneline -n5 00:00:48.860 2ce15115b lib/accel: add spdk_accel_append_dix_generate/verify 00:00:48.860 3bc1795d3 accel_perf: add support for DIX Generate/Verify 00:00:48.860 0a6bb28fa test/accel/dif: add DIX Generate/Verify 
suites 00:00:48.860 52c295e65 lib/accel: add DIX verify 00:00:48.860 b5c6fc4f3 lib/accel: add DIX generate 00:00:48.871 [Pipeline] } 00:00:48.887 [Pipeline] // stage 00:00:48.896 [Pipeline] stage 00:00:48.898 [Pipeline] { (Prepare) 00:00:48.916 [Pipeline] writeFile 00:00:48.933 [Pipeline] sh 00:00:49.213 + logger -p user.info -t JENKINS-CI 00:00:49.225 [Pipeline] sh 00:00:49.505 + logger -p user.info -t JENKINS-CI 00:00:49.517 [Pipeline] sh 00:00:49.817 + cat autorun-spdk.conf 00:00:49.817 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.817 SPDK_TEST_NVMF=1 00:00:49.817 SPDK_TEST_NVME_CLI=1 00:00:49.817 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.817 SPDK_TEST_NVMF_NICS=e810 00:00:49.817 SPDK_TEST_VFIOUSER=1 00:00:49.817 SPDK_RUN_UBSAN=1 00:00:49.817 NET_TYPE=phy 00:00:49.825 RUN_NIGHTLY=0 00:00:49.829 [Pipeline] readFile 00:00:49.855 [Pipeline] withEnv 00:00:49.858 [Pipeline] { 00:00:49.872 [Pipeline] sh 00:00:50.154 + set -ex 00:00:50.154 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:50.154 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.154 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.154 ++ SPDK_TEST_NVMF=1 00:00:50.154 ++ SPDK_TEST_NVME_CLI=1 00:00:50.154 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.154 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.154 ++ SPDK_TEST_VFIOUSER=1 00:00:50.154 ++ SPDK_RUN_UBSAN=1 00:00:50.154 ++ NET_TYPE=phy 00:00:50.154 ++ RUN_NIGHTLY=0 00:00:50.154 + case $SPDK_TEST_NVMF_NICS in 00:00:50.154 + DRIVERS=ice 00:00:50.154 + [[ tcp == \r\d\m\a ]] 00:00:50.154 + [[ -n ice ]] 00:00:50.154 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:50.154 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:50.154 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:50.154 rmmod: ERROR: Module irdma is not currently loaded 00:00:50.154 rmmod: ERROR: Module i40iw is not currently loaded 00:00:50.154 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:50.154 + true 00:00:50.154 + for D in 
$DRIVERS 00:00:50.154 + sudo modprobe ice 00:00:50.154 + exit 0 00:00:50.163 [Pipeline] } 00:00:50.181 [Pipeline] // withEnv 00:00:50.187 [Pipeline] } 00:00:50.203 [Pipeline] // stage 00:00:50.213 [Pipeline] catchError 00:00:50.215 [Pipeline] { 00:00:50.231 [Pipeline] timeout 00:00:50.231 Timeout set to expire in 50 min 00:00:50.233 [Pipeline] { 00:00:50.249 [Pipeline] stage 00:00:50.251 [Pipeline] { (Tests) 00:00:50.267 [Pipeline] sh 00:00:50.547 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.547 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.547 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.547 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:50.547 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.547 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:50.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:50.547 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:50.547 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:50.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:50.547 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:50.547 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:50.547 + source /etc/os-release 00:00:50.547 ++ NAME='Fedora Linux' 00:00:50.547 ++ VERSION='38 (Cloud Edition)' 00:00:50.547 ++ ID=fedora 00:00:50.547 ++ VERSION_ID=38 00:00:50.547 ++ VERSION_CODENAME= 00:00:50.547 ++ PLATFORM_ID=platform:f38 00:00:50.547 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:50.547 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:50.547 ++ LOGO=fedora-logo-icon 00:00:50.547 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:50.547 ++ HOME_URL=https://fedoraproject.org/ 00:00:50.547 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:50.547 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:50.547 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:50.547 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:50.547 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:50.547 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:50.547 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:50.547 ++ SUPPORT_END=2024-05-14 00:00:50.547 ++ VARIANT='Cloud Edition' 00:00:50.547 ++ VARIANT_ID=cloud 00:00:50.547 + uname -a 00:00:50.547 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:50.547 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:51.481 Hugepages 00:00:51.481 node hugesize free / total 00:00:51.481 node0 1048576kB 0 / 0 00:00:51.481 node0 2048kB 0 / 0 00:00:51.481 node1 1048576kB 0 / 0 00:00:51.481 node1 2048kB 0 / 0 00:00:51.481 00:00:51.481 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:51.481 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:00:51.481 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:51.481 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:51.481 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:51.740 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:51.740 + rm -f /tmp/spdk-ld-path 00:00:51.740 + source autorun-spdk.conf 00:00:51.740 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.740 ++ SPDK_TEST_NVMF=1 00:00:51.740 ++ SPDK_TEST_NVME_CLI=1 00:00:51.740 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.740 ++ SPDK_TEST_NVMF_NICS=e810 00:00:51.740 ++ SPDK_TEST_VFIOUSER=1 00:00:51.740 ++ SPDK_RUN_UBSAN=1 00:00:51.740 ++ NET_TYPE=phy 00:00:51.740 ++ RUN_NIGHTLY=0 00:00:51.740 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:51.740 + [[ -n '' ]] 00:00:51.740 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.740 + for M in /var/spdk/build-*-manifest.txt 00:00:51.740 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:51.740 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:51.740 + for M in /var/spdk/build-*-manifest.txt 00:00:51.740 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:51.740 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:51.740 ++ uname 00:00:51.740 + [[ Linux == \L\i\n\u\x ]] 00:00:51.740 + sudo dmesg -T 
00:00:51.740 + sudo dmesg --clear 00:00:51.740 + dmesg_pid=1381877 00:00:51.740 + [[ Fedora Linux == FreeBSD ]] 00:00:51.740 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:51.740 + sudo dmesg -Tw 00:00:51.740 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:51.740 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:51.740 + [[ -x /usr/src/fio-static/fio ]] 00:00:51.740 + export FIO_BIN=/usr/src/fio-static/fio 00:00:51.740 + FIO_BIN=/usr/src/fio-static/fio 00:00:51.740 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:51.740 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:51.740 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:51.740 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:51.740 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:51.740 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:51.740 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:51.740 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:51.740 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:51.740 Test configuration: 00:00:51.740 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.740 SPDK_TEST_NVMF=1 00:00:51.740 SPDK_TEST_NVME_CLI=1 00:00:51.740 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.740 SPDK_TEST_NVMF_NICS=e810 00:00:51.740 SPDK_TEST_VFIOUSER=1 00:00:51.740 SPDK_RUN_UBSAN=1 00:00:51.740 NET_TYPE=phy 00:00:51.740 RUN_NIGHTLY=0 20:28:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:51.740 20:28:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:51.740 20:28:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:51.740 20:28:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:51.740 20:28:47 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.740 20:28:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.740 20:28:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.740 20:28:47 -- paths/export.sh@5 -- $ export PATH 00:00:51.740 20:28:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:51.740 20:28:47 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:51.740 20:28:47 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:51.740 20:28:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721845727.XXXXXX 
00:00:51.740 20:28:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721845727.S6wghA 00:00:51.740 20:28:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:51.740 20:28:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:51.740 20:28:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:51.740 20:28:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:51.740 20:28:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:51.740 20:28:47 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:51.740 20:28:47 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:00:51.740 20:28:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.740 20:28:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:51.740 20:28:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:51.740 20:28:47 -- pm/common@17 -- $ local monitor 00:00:51.740 20:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.740 20:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.740 20:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.740 20:28:47 -- pm/common@21 -- $ date +%s 00:00:51.740 20:28:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:51.740 20:28:47 -- pm/common@21 -- $ date +%s 00:00:51.740 20:28:47 -- pm/common@25 -- $ sleep 
1 00:00:51.740 20:28:47 -- pm/common@21 -- $ date +%s 00:00:51.740 20:28:47 -- pm/common@21 -- $ date +%s 00:00:51.740 20:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721845727 00:00:51.740 20:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721845727 00:00:51.740 20:28:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721845727 00:00:51.740 20:28:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721845727 00:00:51.740 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721845727_collect-vmstat.pm.log 00:00:51.740 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721845727_collect-cpu-load.pm.log 00:00:51.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721845727_collect-cpu-temp.pm.log 00:00:51.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721845727_collect-bmc-pm.bmc.pm.log 00:00:52.675 20:28:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:52.675 20:28:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:52.675 20:28:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:52.675 20:28:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.675 20:28:48 -- 
spdk/autobuild.sh@16 -- $ date -u 00:00:52.675 Wed Jul 24 06:28:48 PM UTC 2024 00:00:52.675 20:28:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:52.934 v24.09-pre-321-g2ce15115b 00:00:52.934 20:28:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:52.934 20:28:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:52.934 20:28:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:52.934 20:28:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:52.934 20:28:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:52.934 20:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.934 ************************************ 00:00:52.934 START TEST ubsan 00:00:52.934 ************************************ 00:00:52.934 20:28:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:52.934 using ubsan 00:00:52.934 00:00:52.934 real 0m0.000s 00:00:52.934 user 0m0.000s 00:00:52.934 sys 0m0.000s 00:00:52.934 20:28:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:52.934 20:28:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:52.934 ************************************ 00:00:52.934 END TEST ubsan 00:00:52.934 ************************************ 00:00:52.934 20:28:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:52.934 20:28:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:52.934 20:28:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:52.934 20:28:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:52.934 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:52.934 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:53.192 Using 'verbs' RDMA provider 00:01:03.732 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:13.708 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:13.708 Creating mk/config.mk...done. 00:01:13.708 Creating mk/cc.flags.mk...done. 00:01:13.708 Type 'make' to build. 00:01:13.708 20:29:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:13.708 20:29:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:13.708 20:29:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:13.708 20:29:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.708 ************************************ 00:01:13.708 START TEST make 00:01:13.708 ************************************ 00:01:13.708 20:29:08 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:13.708 make[1]: Nothing to be done for 'all'. 
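In the Prepare stage earlier in this log, autorun-spdk.conf is written as a flat KEY=VALUE file, re-sourced under `set -ex`, and `SPDK_TEST_NVMF_NICS` selects which kernel driver to modprobe (e810 maps to ice). A minimal sketch of that config-driven selection — only the e810-to-ice mapping actually appears in the log; the mlx5 branch below is a hypothetical illustration:

```shell
# Sketch of the autorun-spdk.conf pattern from the Prepare stage:
# a flat KEY=VALUE file is written, sourced, and one key picks the
# NIC driver. Only the e810->ice mapping is shown in the log; the
# mlx5 branch is hypothetical.
conf=$(mktemp)
cat > "$conf" <<'EOF'
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
EOF
. "$conf"

case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;                  # seen in the log
    mlx5) DRIVERS="mlx5_core mlx5_ib" ;;  # hypothetical
    *)    DRIVERS= ;;
esac
echo "transport=$SPDK_TEST_NVMF_TRANSPORT drivers=$DRIVERS"
rm -f "$conf"
```

The actual job then runs `sudo modprobe $DRIVERS` for each selected driver, tolerating modules that are not loaded (the `rmmod ... || true` pattern visible above).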
00:01:14.651 The Meson build system 00:01:14.651 Version: 1.3.1 00:01:14.651 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:14.651 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.651 Build type: native build 00:01:14.651 Project name: libvfio-user 00:01:14.651 Project version: 0.0.1 00:01:14.651 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:14.651 C linker for the host machine: cc ld.bfd 2.39-16 00:01:14.651 Host machine cpu family: x86_64 00:01:14.651 Host machine cpu: x86_64 00:01:14.651 Run-time dependency threads found: YES 00:01:14.651 Library dl found: YES 00:01:14.651 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:14.651 Run-time dependency json-c found: YES 0.17 00:01:14.651 Run-time dependency cmocka found: YES 1.1.7 00:01:14.651 Program pytest-3 found: NO 00:01:14.651 Program flake8 found: NO 00:01:14.651 Program misspell-fixer found: NO 00:01:14.651 Program restructuredtext-lint found: NO 00:01:14.651 Program valgrind found: YES (/usr/bin/valgrind) 00:01:14.651 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:14.651 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:14.651 Compiler for C supports arguments -Wwrite-strings: YES 00:01:14.651 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:14.651 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:14.651 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:14.652 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:14.652 Build targets in project: 8 00:01:14.652 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:14.652 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:14.652 00:01:14.652 libvfio-user 0.0.1 00:01:14.652 00:01:14.652 User defined options 00:01:14.652 buildtype : debug 00:01:14.652 default_library: shared 00:01:14.652 libdir : /usr/local/lib 00:01:14.652 00:01:14.652 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.609 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.609 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:15.609 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:15.867 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:15.868 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:15.868 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:15.868 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:15.868 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:15.868 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:15.868 [9/37] Compiling C object samples/null.p/null.c.o 00:01:15.868 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:15.868 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:15.868 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:15.868 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:15.868 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:15.868 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:15.868 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:15.868 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:15.868 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:15.868 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:15.868 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:15.868 [21/37] Compiling C object samples/server.p/server.c.o 00:01:15.868 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:15.868 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:15.868 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:16.126 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:16.126 [26/37] Compiling C object samples/client.p/client.c.o 00:01:16.126 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:16.126 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:16.126 [29/37] Linking target samples/client 00:01:16.126 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:16.126 [31/37] Linking target test/unit_tests 00:01:16.391 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:16.391 [33/37] Linking target samples/server 00:01:16.391 [34/37] Linking target samples/gpio-pci-idio-16 00:01:16.391 [35/37] Linking target samples/null 00:01:16.391 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:16.391 [37/37] Linking target samples/lspci 00:01:16.391 INFO: autodetecting backend as ninja 00:01:16.391 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:16.650 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:17.274 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:17.274 ninja: no work to do. 
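The `START TEST ubsan` / `END TEST ubsan` banners in the autobuild section above come from a `run_test` helper in autotest_common.sh that wraps a command with banners and timing. A hypothetical minimal stand-in for just the banner-and-status shape (the real helper also records real/user/sys timing and xtrace state):

```shell
# Hypothetical minimal stand-in for the run_test helper whose
# START TEST / END TEST banners appear in the ubsan section; the
# real autotest_common.sh version also handles timing and xtrace.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

out=$(run_test ubsan echo 'using ubsan')
echo "$out"
```

The command's own output (here `using ubsan`) lands between the two banners, which is exactly the shape the log shows for the ubsan check.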
00:01:22.543 The Meson build system 00:01:22.543 Version: 1.3.1 00:01:22.543 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:22.543 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:22.543 Build type: native build 00:01:22.543 Program cat found: YES (/usr/bin/cat) 00:01:22.543 Project name: DPDK 00:01:22.543 Project version: 24.03.0 00:01:22.543 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:22.543 C linker for the host machine: cc ld.bfd 2.39-16 00:01:22.543 Host machine cpu family: x86_64 00:01:22.543 Host machine cpu: x86_64 00:01:22.543 Message: ## Building in Developer Mode ## 00:01:22.543 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:22.544 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:22.544 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:22.544 Program python3 found: YES (/usr/bin/python3) 00:01:22.544 Program cat found: YES (/usr/bin/cat) 00:01:22.544 Compiler for C supports arguments -march=native: YES 00:01:22.544 Checking for size of "void *" : 8 00:01:22.544 Checking for size of "void *" : 8 (cached) 00:01:22.544 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:22.544 Library m found: YES 00:01:22.544 Library numa found: YES 00:01:22.544 Has header "numaif.h" : YES 00:01:22.544 Library fdt found: NO 00:01:22.544 Library execinfo found: NO 00:01:22.544 Has header "execinfo.h" : YES 00:01:22.544 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:22.544 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:22.544 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:22.544 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:22.544 Run-time dependency openssl found: YES 3.0.9 00:01:22.544 Run-time 
dependency libpcap found: YES 1.10.4 00:01:22.544 Has header "pcap.h" with dependency libpcap: YES 00:01:22.544 Compiler for C supports arguments -Wcast-qual: YES 00:01:22.544 Compiler for C supports arguments -Wdeprecated: YES 00:01:22.544 Compiler for C supports arguments -Wformat: YES 00:01:22.544 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:22.544 Compiler for C supports arguments -Wformat-security: NO 00:01:22.544 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.544 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:22.544 Compiler for C supports arguments -Wnested-externs: YES 00:01:22.544 Compiler for C supports arguments -Wold-style-definition: YES 00:01:22.544 Compiler for C supports arguments -Wpointer-arith: YES 00:01:22.544 Compiler for C supports arguments -Wsign-compare: YES 00:01:22.544 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:22.544 Compiler for C supports arguments -Wundef: YES 00:01:22.544 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.544 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:22.544 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:22.544 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.544 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:22.544 Program objdump found: YES (/usr/bin/objdump) 00:01:22.544 Compiler for C supports arguments -mavx512f: YES 00:01:22.544 Checking if "AVX512 checking" compiles: YES 00:01:22.544 Fetching value of define "__SSE4_2__" : 1 00:01:22.544 Fetching value of define "__AES__" : 1 00:01:22.544 Fetching value of define "__AVX__" : 1 00:01:22.544 Fetching value of define "__AVX2__" : (undefined) 00:01:22.544 Fetching value of define "__AVX512BW__" : (undefined) 00:01:22.544 Fetching value of define "__AVX512CD__" : (undefined) 00:01:22.544 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:22.544 Fetching 
value of define "__AVX512F__" : (undefined) 00:01:22.544 Fetching value of define "__AVX512VL__" : (undefined) 00:01:22.544 Fetching value of define "__PCLMUL__" : 1 00:01:22.544 Fetching value of define "__RDRND__" : 1 00:01:22.544 Fetching value of define "__RDSEED__" : (undefined) 00:01:22.544 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:22.544 Fetching value of define "__znver1__" : (undefined) 00:01:22.544 Fetching value of define "__znver2__" : (undefined) 00:01:22.544 Fetching value of define "__znver3__" : (undefined) 00:01:22.544 Fetching value of define "__znver4__" : (undefined) 00:01:22.544 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:22.544 Message: lib/log: Defining dependency "log" 00:01:22.544 Message: lib/kvargs: Defining dependency "kvargs" 00:01:22.544 Message: lib/telemetry: Defining dependency "telemetry" 00:01:22.544 Checking for function "getentropy" : NO 00:01:22.544 Message: lib/eal: Defining dependency "eal" 00:01:22.544 Message: lib/ring: Defining dependency "ring" 00:01:22.544 Message: lib/rcu: Defining dependency "rcu" 00:01:22.544 Message: lib/mempool: Defining dependency "mempool" 00:01:22.544 Message: lib/mbuf: Defining dependency "mbuf" 00:01:22.544 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:22.544 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.544 Compiler for C supports arguments -mpclmul: YES 00:01:22.544 Compiler for C supports arguments -maes: YES 00:01:22.544 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.544 Compiler for C supports arguments -mavx512bw: YES 00:01:22.544 Compiler for C supports arguments -mavx512dq: YES 00:01:22.544 Compiler for C supports arguments -mavx512vl: YES 00:01:22.544 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:22.544 Compiler for C supports arguments -mavx2: YES 00:01:22.544 Compiler for C supports arguments -mavx: YES 00:01:22.544 Message: lib/net: Defining dependency "net" 00:01:22.544 
Message: lib/meter: Defining dependency "meter" 00:01:22.544 Message: lib/ethdev: Defining dependency "ethdev" 00:01:22.544 Message: lib/pci: Defining dependency "pci" 00:01:22.544 Message: lib/cmdline: Defining dependency "cmdline" 00:01:22.544 Message: lib/hash: Defining dependency "hash" 00:01:22.544 Message: lib/timer: Defining dependency "timer" 00:01:22.544 Message: lib/compressdev: Defining dependency "compressdev" 00:01:22.544 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:22.544 Message: lib/dmadev: Defining dependency "dmadev" 00:01:22.544 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:22.544 Message: lib/power: Defining dependency "power" 00:01:22.544 Message: lib/reorder: Defining dependency "reorder" 00:01:22.544 Message: lib/security: Defining dependency "security" 00:01:22.544 Has header "linux/userfaultfd.h" : YES 00:01:22.544 Has header "linux/vduse.h" : YES 00:01:22.544 Message: lib/vhost: Defining dependency "vhost" 00:01:22.544 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:22.544 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:22.544 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:22.544 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:22.544 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:22.544 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:22.544 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:22.544 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:22.544 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:22.544 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:22.544 Program doxygen found: YES (/usr/bin/doxygen) 00:01:22.544 Configuring doxy-api-html.conf using configuration 00:01:22.544 Configuring doxy-api-man.conf using configuration 00:01:22.544 
Program mandb found: YES (/usr/bin/mandb) 00:01:22.544 Program sphinx-build found: NO 00:01:22.544 Configuring rte_build_config.h using configuration 00:01:22.544 Message: 00:01:22.544 ================= 00:01:22.544 Applications Enabled 00:01:22.544 ================= 00:01:22.544 00:01:22.544 apps: 00:01:22.544 00:01:22.544 00:01:22.544 Message: 00:01:22.544 ================= 00:01:22.544 Libraries Enabled 00:01:22.544 ================= 00:01:22.544 00:01:22.544 libs: 00:01:22.544 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:22.544 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:22.544 cryptodev, dmadev, power, reorder, security, vhost, 00:01:22.544 00:01:22.544 Message: 00:01:22.544 =============== 00:01:22.544 Drivers Enabled 00:01:22.544 =============== 00:01:22.544 00:01:22.544 common: 00:01:22.544 00:01:22.544 bus: 00:01:22.544 pci, vdev, 00:01:22.544 mempool: 00:01:22.544 ring, 00:01:22.544 dma: 00:01:22.544 00:01:22.544 net: 00:01:22.544 00:01:22.544 crypto: 00:01:22.544 00:01:22.544 compress: 00:01:22.544 00:01:22.544 vdpa: 00:01:22.544 00:01:22.544 00:01:22.544 Message: 00:01:22.544 ================= 00:01:22.544 Content Skipped 00:01:22.544 ================= 00:01:22.544 00:01:22.544 apps: 00:01:22.544 dumpcap: explicitly disabled via build config 00:01:22.544 graph: explicitly disabled via build config 00:01:22.544 pdump: explicitly disabled via build config 00:01:22.544 proc-info: explicitly disabled via build config 00:01:22.544 test-acl: explicitly disabled via build config 00:01:22.544 test-bbdev: explicitly disabled via build config 00:01:22.544 test-cmdline: explicitly disabled via build config 00:01:22.544 test-compress-perf: explicitly disabled via build config 00:01:22.544 test-crypto-perf: explicitly disabled via build config 00:01:22.544 test-dma-perf: explicitly disabled via build config 00:01:22.544 test-eventdev: explicitly disabled via build config 00:01:22.544 test-fib: explicitly disabled via build 
config 00:01:22.544 test-flow-perf: explicitly disabled via build config 00:01:22.544 test-gpudev: explicitly disabled via build config 00:01:22.544 test-mldev: explicitly disabled via build config 00:01:22.544 test-pipeline: explicitly disabled via build config 00:01:22.544 test-pmd: explicitly disabled via build config 00:01:22.544 test-regex: explicitly disabled via build config 00:01:22.544 test-sad: explicitly disabled via build config 00:01:22.544 test-security-perf: explicitly disabled via build config 00:01:22.544 00:01:22.544 libs: 00:01:22.544 argparse: explicitly disabled via build config 00:01:22.544 metrics: explicitly disabled via build config 00:01:22.544 acl: explicitly disabled via build config 00:01:22.544 bbdev: explicitly disabled via build config 00:01:22.544 bitratestats: explicitly disabled via build config 00:01:22.544 bpf: explicitly disabled via build config 00:01:22.544 cfgfile: explicitly disabled via build config 00:01:22.544 distributor: explicitly disabled via build config 00:01:22.544 efd: explicitly disabled via build config 00:01:22.544 eventdev: explicitly disabled via build config 00:01:22.545 dispatcher: explicitly disabled via build config 00:01:22.545 gpudev: explicitly disabled via build config 00:01:22.545 gro: explicitly disabled via build config 00:01:22.545 gso: explicitly disabled via build config 00:01:22.545 ip_frag: explicitly disabled via build config 00:01:22.545 jobstats: explicitly disabled via build config 00:01:22.545 latencystats: explicitly disabled via build config 00:01:22.545 lpm: explicitly disabled via build config 00:01:22.545 member: explicitly disabled via build config 00:01:22.545 pcapng: explicitly disabled via build config 00:01:22.545 rawdev: explicitly disabled via build config 00:01:22.545 regexdev: explicitly disabled via build config 00:01:22.545 mldev: explicitly disabled via build config 00:01:22.545 rib: explicitly disabled via build config 00:01:22.545 sched: explicitly disabled via build 
config 00:01:22.545 stack: explicitly disabled via build config 00:01:22.545 ipsec: explicitly disabled via build config 00:01:22.545 pdcp: explicitly disabled via build config 00:01:22.545 fib: explicitly disabled via build config 00:01:22.545 port: explicitly disabled via build config 00:01:22.545 pdump: explicitly disabled via build config 00:01:22.545 table: explicitly disabled via build config 00:01:22.545 pipeline: explicitly disabled via build config 00:01:22.545 graph: explicitly disabled via build config 00:01:22.545 node: explicitly disabled via build config 00:01:22.545 00:01:22.545 drivers: 00:01:22.545 common/cpt: not in enabled drivers build config 00:01:22.545 common/dpaax: not in enabled drivers build config 00:01:22.545 common/iavf: not in enabled drivers build config 00:01:22.545 common/idpf: not in enabled drivers build config 00:01:22.545 common/ionic: not in enabled drivers build config 00:01:22.545 common/mvep: not in enabled drivers build config 00:01:22.545 common/octeontx: not in enabled drivers build config 00:01:22.545 bus/auxiliary: not in enabled drivers build config 00:01:22.545 bus/cdx: not in enabled drivers build config 00:01:22.545 bus/dpaa: not in enabled drivers build config 00:01:22.545 bus/fslmc: not in enabled drivers build config 00:01:22.545 bus/ifpga: not in enabled drivers build config 00:01:22.545 bus/platform: not in enabled drivers build config 00:01:22.545 bus/uacce: not in enabled drivers build config 00:01:22.545 bus/vmbus: not in enabled drivers build config 00:01:22.545 common/cnxk: not in enabled drivers build config 00:01:22.545 common/mlx5: not in enabled drivers build config 00:01:22.545 common/nfp: not in enabled drivers build config 00:01:22.545 common/nitrox: not in enabled drivers build config 00:01:22.545 common/qat: not in enabled drivers build config 00:01:22.545 common/sfc_efx: not in enabled drivers build config 00:01:22.545 mempool/bucket: not in enabled drivers build config 00:01:22.545 mempool/cnxk: 
not in enabled drivers build config 00:01:22.545 mempool/dpaa: not in enabled drivers build config 00:01:22.545 mempool/dpaa2: not in enabled drivers build config 00:01:22.545 mempool/octeontx: not in enabled drivers build config 00:01:22.545 mempool/stack: not in enabled drivers build config 00:01:22.545 dma/cnxk: not in enabled drivers build config 00:01:22.545 dma/dpaa: not in enabled drivers build config 00:01:22.545 dma/dpaa2: not in enabled drivers build config 00:01:22.545 dma/hisilicon: not in enabled drivers build config 00:01:22.545 dma/idxd: not in enabled drivers build config 00:01:22.545 dma/ioat: not in enabled drivers build config 00:01:22.545 dma/skeleton: not in enabled drivers build config 00:01:22.545 net/af_packet: not in enabled drivers build config 00:01:22.545 net/af_xdp: not in enabled drivers build config 00:01:22.545 net/ark: not in enabled drivers build config 00:01:22.545 net/atlantic: not in enabled drivers build config 00:01:22.545 net/avp: not in enabled drivers build config 00:01:22.545 net/axgbe: not in enabled drivers build config 00:01:22.545 net/bnx2x: not in enabled drivers build config 00:01:22.545 net/bnxt: not in enabled drivers build config 00:01:22.545 net/bonding: not in enabled drivers build config 00:01:22.545 net/cnxk: not in enabled drivers build config 00:01:22.545 net/cpfl: not in enabled drivers build config 00:01:22.545 net/cxgbe: not in enabled drivers build config 00:01:22.545 net/dpaa: not in enabled drivers build config 00:01:22.545 net/dpaa2: not in enabled drivers build config 00:01:22.545 net/e1000: not in enabled drivers build config 00:01:22.545 net/ena: not in enabled drivers build config 00:01:22.545 net/enetc: not in enabled drivers build config 00:01:22.545 net/enetfec: not in enabled drivers build config 00:01:22.545 net/enic: not in enabled drivers build config 00:01:22.545 net/failsafe: not in enabled drivers build config 00:01:22.545 net/fm10k: not in enabled drivers build config 00:01:22.545 
net/gve: not in enabled drivers build config 00:01:22.545 net/hinic: not in enabled drivers build config 00:01:22.545 net/hns3: not in enabled drivers build config 00:01:22.545 net/i40e: not in enabled drivers build config 00:01:22.545 net/iavf: not in enabled drivers build config 00:01:22.545 net/ice: not in enabled drivers build config 00:01:22.545 net/idpf: not in enabled drivers build config 00:01:22.545 net/igc: not in enabled drivers build config 00:01:22.545 net/ionic: not in enabled drivers build config 00:01:22.545 net/ipn3ke: not in enabled drivers build config 00:01:22.545 net/ixgbe: not in enabled drivers build config 00:01:22.545 net/mana: not in enabled drivers build config 00:01:22.545 net/memif: not in enabled drivers build config 00:01:22.545 net/mlx4: not in enabled drivers build config 00:01:22.545 net/mlx5: not in enabled drivers build config 00:01:22.545 net/mvneta: not in enabled drivers build config 00:01:22.545 net/mvpp2: not in enabled drivers build config 00:01:22.545 net/netvsc: not in enabled drivers build config 00:01:22.545 net/nfb: not in enabled drivers build config 00:01:22.545 net/nfp: not in enabled drivers build config 00:01:22.545 net/ngbe: not in enabled drivers build config 00:01:22.545 net/null: not in enabled drivers build config 00:01:22.545 net/octeontx: not in enabled drivers build config 00:01:22.545 net/octeon_ep: not in enabled drivers build config 00:01:22.545 net/pcap: not in enabled drivers build config 00:01:22.545 net/pfe: not in enabled drivers build config 00:01:22.545 net/qede: not in enabled drivers build config 00:01:22.545 net/ring: not in enabled drivers build config 00:01:22.545 net/sfc: not in enabled drivers build config 00:01:22.545 net/softnic: not in enabled drivers build config 00:01:22.545 net/tap: not in enabled drivers build config 00:01:22.545 net/thunderx: not in enabled drivers build config 00:01:22.545 net/txgbe: not in enabled drivers build config 00:01:22.545 net/vdev_netvsc: not in enabled 
drivers build config 00:01:22.545 net/vhost: not in enabled drivers build config 00:01:22.545 net/virtio: not in enabled drivers build config 00:01:22.545 net/vmxnet3: not in enabled drivers build config 00:01:22.545 raw/*: missing internal dependency, "rawdev" 00:01:22.545 crypto/armv8: not in enabled drivers build config 00:01:22.545 crypto/bcmfs: not in enabled drivers build config 00:01:22.545 crypto/caam_jr: not in enabled drivers build config 00:01:22.545 crypto/ccp: not in enabled drivers build config 00:01:22.545 crypto/cnxk: not in enabled drivers build config 00:01:22.545 crypto/dpaa_sec: not in enabled drivers build config 00:01:22.545 crypto/dpaa2_sec: not in enabled drivers build config 00:01:22.545 crypto/ipsec_mb: not in enabled drivers build config 00:01:22.545 crypto/mlx5: not in enabled drivers build config 00:01:22.545 crypto/mvsam: not in enabled drivers build config 00:01:22.545 crypto/nitrox: not in enabled drivers build config 00:01:22.545 crypto/null: not in enabled drivers build config 00:01:22.545 crypto/octeontx: not in enabled drivers build config 00:01:22.545 crypto/openssl: not in enabled drivers build config 00:01:22.545 crypto/scheduler: not in enabled drivers build config 00:01:22.545 crypto/uadk: not in enabled drivers build config 00:01:22.545 crypto/virtio: not in enabled drivers build config 00:01:22.545 compress/isal: not in enabled drivers build config 00:01:22.545 compress/mlx5: not in enabled drivers build config 00:01:22.545 compress/nitrox: not in enabled drivers build config 00:01:22.545 compress/octeontx: not in enabled drivers build config 00:01:22.545 compress/zlib: not in enabled drivers build config 00:01:22.545 regex/*: missing internal dependency, "regexdev" 00:01:22.545 ml/*: missing internal dependency, "mldev" 00:01:22.545 vdpa/ifc: not in enabled drivers build config 00:01:22.545 vdpa/mlx5: not in enabled drivers build config 00:01:22.545 vdpa/nfp: not in enabled drivers build config 00:01:22.545 vdpa/sfc: not 
in enabled drivers build config 00:01:22.545 event/*: missing internal dependency, "eventdev" 00:01:22.545 baseband/*: missing internal dependency, "bbdev" 00:01:22.545 gpu/*: missing internal dependency, "gpudev" 00:01:22.545 00:01:22.545 00:01:22.545 Build targets in project: 85 00:01:22.545 00:01:22.545 DPDK 24.03.0 00:01:22.545 00:01:22.546 User defined options 00:01:22.546 buildtype : debug 00:01:22.546 default_library : shared 00:01:22.546 libdir : lib 00:01:22.546 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:22.546 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:22.546 c_link_args : 00:01:22.546 cpu_instruction_set: native 00:01:22.546 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:22.546 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:22.546 enable_docs : false 00:01:22.546 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:22.546 enable_kmods : false 00:01:22.546 max_lcores : 128 00:01:22.546 tests : false 00:01:22.546 00:01:22.546 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.546 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:22.546 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:22.546 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:22.546 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:22.546 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:01:22.546 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:22.546 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:22.546 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:22.546 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:22.546 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:22.546 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:22.546 [11/268] Linking static target lib/librte_kvargs.a 00:01:22.546 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:22.546 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:22.807 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:22.807 [15/268] Linking static target lib/librte_log.a 00:01:22.807 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:23.377 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.377 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:23.377 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:23.377 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:23.377 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:23.377 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:23.377 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.377 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:23.377 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:23.377 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:23.377 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.377 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.377 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:23.377 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:23.377 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.377 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.647 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:23.647 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:23.647 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.647 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.647 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.647 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.647 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.647 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.647 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:23.647 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:23.647 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:23.647 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:23.647 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:23.647 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.647 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:23.647 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.647 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:23.647 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.647 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:23.647 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:23.647 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:23.647 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.647 [55/268] Linking static target lib/librte_telemetry.a 00:01:23.647 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:23.647 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:23.647 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:23.647 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:23.647 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.647 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:23.647 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:23.914 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.914 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:23.914 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.914 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:23.914 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:24.178 [68/268] Linking target lib/librte_log.so.24.1 00:01:24.178 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.178 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.178 [71/268] Linking static target lib/librte_pci.a 00:01:24.178 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:24.179 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.179 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.437 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:24.437 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:24.437 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.437 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.437 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.437 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.437 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.437 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.437 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.437 [84/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:24.437 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.437 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.437 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.437 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.437 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.437 [90/268] Linking static target lib/librte_ring.a 00:01:24.437 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.437 [92/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.437 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.437 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.437 [95/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.437 [96/268] Linking target lib/librte_kvargs.so.24.1 00:01:24.437 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.437 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.698 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.698 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.698 [101/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.698 [102/268] Linking static target lib/librte_meter.a 00:01:24.698 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.698 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:24.698 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:24.698 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:24.698 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.698 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.698 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:24.698 [110/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.698 [111/268] Linking static target lib/librte_eal.a 00:01:24.698 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:24.698 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:24.698 [114/268] Linking static target lib/librte_mempool.a 00:01:24.698 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.698 [116/268] Linking target lib/librte_telemetry.so.24.1 00:01:24.698 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:24.698 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:01:24.698 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.698 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.698 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:24.698 [122/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:24.960 [123/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.960 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:24.960 [125/268] Linking static target lib/librte_rcu.a 00:01:24.960 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.960 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.960 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:24.960 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:24.960 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:24.960 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:24.960 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.960 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:24.960 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:25.221 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.221 [136/268] Linking static target lib/librte_net.a 00:01:25.221 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.221 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.221 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.221 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.221 [141/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.221 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.221 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.222 [144/268] Linking static target lib/librte_cmdline.a 00:01:25.481 [145/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.481 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:25.481 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.481 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.481 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.481 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.481 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.481 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.481 [153/268] Linking static target lib/librte_timer.a 00:01:25.740 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.740 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.740 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.740 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.740 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.740 [159/268] Linking static target lib/librte_dmadev.a 00:01:25.740 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.740 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.740 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.740 [163/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.740 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.741 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.741 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:25.741 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.741 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.999 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.999 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.999 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.999 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.999 [173/268] Linking static target lib/librte_hash.a 00:01:25.999 [174/268] Linking static target lib/librte_power.a 00:01:25.999 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.999 [176/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.999 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.999 [178/268] Linking static target lib/librte_compressdev.a 00:01:25.999 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.999 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.999 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:26.257 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:26.257 [183/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.257 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:26.257 [185/268] Linking static target lib/librte_mbuf.a 00:01:26.257 [186/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:26.257 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.257 [188/268] Linking static target lib/librte_reorder.a 00:01:26.257 [189/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:26.257 [190/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:26.257 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.257 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:26.257 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:26.257 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:26.257 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:26.257 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:26.257 [197/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:26.516 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.516 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:26.516 [200/268] Linking static target lib/librte_security.a 00:01:26.516 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:26.516 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.516 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.516 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.516 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:26.516 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.516 [207/268] Generating lib/hash.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:26.516 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.516 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:26.516 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.516 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.516 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:26.516 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:26.516 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:26.516 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.516 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.516 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.516 [218/268] Linking static target drivers/librte_bus_pci.a 00:01:26.516 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:26.784 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.785 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.785 [222/268] Linking static target lib/librte_ethdev.a 00:01:26.785 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:26.785 [224/268] Linking static target lib/librte_cryptodev.a 00:01:26.785 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.050 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.982 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.355 [228/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:31.260 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.260 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.260 [231/268] Linking target lib/librte_eal.so.24.1 00:01:31.260 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:31.260 [233/268] Linking target lib/librte_ring.so.24.1 00:01:31.260 [234/268] Linking target lib/librte_meter.so.24.1 00:01:31.260 [235/268] Linking target lib/librte_timer.so.24.1 00:01:31.260 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:31.260 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:31.260 [238/268] Linking target lib/librte_pci.so.24.1 00:01:31.260 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:31.260 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:31.260 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:31.517 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:31.517 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:31.517 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:31.517 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:31.517 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:31.517 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:31.517 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:31.517 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:31.517 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:31.775 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:31.775 [252/268] Linking 
target lib/librte_reorder.so.24.1 00:01:31.775 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:31.775 [254/268] Linking target lib/librte_net.so.24.1 00:01:31.775 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:31.775 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:31.775 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:32.032 [258/268] Linking target lib/librte_security.so.24.1 00:01:32.032 [259/268] Linking target lib/librte_hash.so.24.1 00:01:32.032 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:32.032 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:32.032 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:32.032 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:32.032 [264/268] Linking target lib/librte_power.so.24.1 00:01:34.557 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.557 [266/268] Linking static target lib/librte_vhost.a 00:01:35.489 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.489 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:35.489 INFO: autodetecting backend as ninja 00:01:35.489 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:36.461 CC lib/log/log.o 00:01:36.461 CC lib/ut/ut.o 00:01:36.461 CC lib/log/log_flags.o 00:01:36.461 CC lib/log/log_deprecated.o 00:01:36.461 CC lib/ut_mock/mock.o 00:01:36.718 LIB libspdk_log.a 00:01:36.718 LIB libspdk_ut.a 00:01:36.718 LIB libspdk_ut_mock.a 00:01:36.718 SO libspdk_ut.so.2.0 00:01:36.718 SO libspdk_log.so.7.0 00:01:36.718 SO libspdk_ut_mock.so.6.0 00:01:36.718 SYMLINK libspdk_ut.so 00:01:36.718 SYMLINK libspdk_ut_mock.so 00:01:36.718 SYMLINK libspdk_log.so 00:01:36.976 CC lib/dma/dma.o 
00:01:36.976 CC lib/util/base64.o 00:01:36.976 CXX lib/trace_parser/trace.o 00:01:36.976 CC lib/ioat/ioat.o 00:01:36.976 CC lib/util/bit_array.o 00:01:36.976 CC lib/util/cpuset.o 00:01:36.976 CC lib/util/crc16.o 00:01:36.976 CC lib/util/crc32.o 00:01:36.976 CC lib/util/crc32c.o 00:01:36.976 CC lib/util/crc32_ieee.o 00:01:36.976 CC lib/util/crc64.o 00:01:36.976 CC lib/util/dif.o 00:01:36.976 CC lib/util/fd.o 00:01:36.976 CC lib/util/fd_group.o 00:01:36.976 CC lib/util/file.o 00:01:36.976 CC lib/util/hexlify.o 00:01:36.976 CC lib/util/iov.o 00:01:36.976 CC lib/util/math.o 00:01:36.976 CC lib/util/net.o 00:01:36.976 CC lib/util/pipe.o 00:01:36.976 CC lib/util/strerror_tls.o 00:01:36.976 CC lib/util/string.o 00:01:36.976 CC lib/util/uuid.o 00:01:36.976 CC lib/util/zipf.o 00:01:36.976 CC lib/util/xor.o 00:01:36.976 CC lib/vfio_user/host/vfio_user_pci.o 00:01:36.976 CC lib/vfio_user/host/vfio_user.o 00:01:37.234 LIB libspdk_dma.a 00:01:37.234 SO libspdk_dma.so.4.0 00:01:37.234 LIB libspdk_ioat.a 00:01:37.234 SYMLINK libspdk_dma.so 00:01:37.234 SO libspdk_ioat.so.7.0 00:01:37.234 SYMLINK libspdk_ioat.so 00:01:37.234 LIB libspdk_vfio_user.a 00:01:37.234 SO libspdk_vfio_user.so.5.0 00:01:37.492 SYMLINK libspdk_vfio_user.so 00:01:37.492 LIB libspdk_util.a 00:01:37.492 SO libspdk_util.so.10.0 00:01:37.750 SYMLINK libspdk_util.so 00:01:37.750 LIB libspdk_trace_parser.a 00:01:37.750 SO libspdk_trace_parser.so.5.0 00:01:38.012 CC lib/idxd/idxd.o 00:01:38.012 CC lib/conf/conf.o 00:01:38.012 CC lib/json/json_parse.o 00:01:38.012 CC lib/rdma_provider/common.o 00:01:38.012 CC lib/idxd/idxd_user.o 00:01:38.012 CC lib/rdma_utils/rdma_utils.o 00:01:38.012 CC lib/json/json_util.o 00:01:38.012 CC lib/vmd/vmd.o 00:01:38.012 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:38.012 CC lib/env_dpdk/env.o 00:01:38.012 CC lib/idxd/idxd_kernel.o 00:01:38.012 CC lib/vmd/led.o 00:01:38.012 CC lib/json/json_write.o 00:01:38.012 CC lib/env_dpdk/memory.o 00:01:38.012 CC lib/env_dpdk/pci.o 
00:01:38.012 CC lib/env_dpdk/init.o 00:01:38.012 CC lib/env_dpdk/threads.o 00:01:38.012 CC lib/env_dpdk/pci_ioat.o 00:01:38.012 CC lib/env_dpdk/pci_virtio.o 00:01:38.012 CC lib/env_dpdk/pci_vmd.o 00:01:38.012 CC lib/env_dpdk/pci_idxd.o 00:01:38.012 CC lib/env_dpdk/sigbus_handler.o 00:01:38.012 CC lib/env_dpdk/pci_event.o 00:01:38.012 CC lib/env_dpdk/pci_dpdk.o 00:01:38.012 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.012 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:38.012 SYMLINK libspdk_trace_parser.so 00:01:38.012 LIB libspdk_rdma_provider.a 00:01:38.270 SO libspdk_rdma_provider.so.6.0 00:01:38.270 LIB libspdk_rdma_utils.a 00:01:38.270 SYMLINK libspdk_rdma_provider.so 00:01:38.270 LIB libspdk_json.a 00:01:38.270 SO libspdk_rdma_utils.so.1.0 00:01:38.270 LIB libspdk_conf.a 00:01:38.270 SO libspdk_json.so.6.0 00:01:38.270 SO libspdk_conf.so.6.0 00:01:38.270 SYMLINK libspdk_rdma_utils.so 00:01:38.270 SYMLINK libspdk_conf.so 00:01:38.270 SYMLINK libspdk_json.so 00:01:38.528 CC lib/jsonrpc/jsonrpc_server.o 00:01:38.528 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:38.528 CC lib/jsonrpc/jsonrpc_client.o 00:01:38.528 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:38.528 LIB libspdk_idxd.a 00:01:38.528 SO libspdk_idxd.so.12.0 00:01:38.528 SYMLINK libspdk_idxd.so 00:01:38.528 LIB libspdk_vmd.a 00:01:38.528 SO libspdk_vmd.so.6.0 00:01:38.785 SYMLINK libspdk_vmd.so 00:01:38.785 LIB libspdk_jsonrpc.a 00:01:38.785 SO libspdk_jsonrpc.so.6.0 00:01:38.785 SYMLINK libspdk_jsonrpc.so 00:01:39.043 CC lib/rpc/rpc.o 00:01:39.301 LIB libspdk_rpc.a 00:01:39.301 SO libspdk_rpc.so.6.0 00:01:39.301 SYMLINK libspdk_rpc.so 00:01:39.559 CC lib/trace/trace.o 00:01:39.559 CC lib/trace/trace_flags.o 00:01:39.559 CC lib/trace/trace_rpc.o 00:01:39.559 CC lib/keyring/keyring.o 00:01:39.559 CC lib/keyring/keyring_rpc.o 00:01:39.559 CC lib/notify/notify.o 00:01:39.559 CC lib/notify/notify_rpc.o 00:01:39.559 LIB libspdk_notify.a 00:01:39.818 SO libspdk_notify.so.6.0 00:01:39.818 LIB libspdk_keyring.a 00:01:39.818 
SYMLINK libspdk_notify.so 00:01:39.818 LIB libspdk_trace.a 00:01:39.818 SO libspdk_keyring.so.1.0 00:01:39.818 SO libspdk_trace.so.10.0 00:01:39.818 SYMLINK libspdk_keyring.so 00:01:39.818 SYMLINK libspdk_trace.so 00:01:40.075 LIB libspdk_env_dpdk.a 00:01:40.075 CC lib/sock/sock.o 00:01:40.075 CC lib/sock/sock_rpc.o 00:01:40.075 CC lib/thread/thread.o 00:01:40.075 CC lib/thread/iobuf.o 00:01:40.075 SO libspdk_env_dpdk.so.15.0 00:01:40.075 SYMLINK libspdk_env_dpdk.so 00:01:40.333 LIB libspdk_sock.a 00:01:40.333 SO libspdk_sock.so.10.0 00:01:40.592 SYMLINK libspdk_sock.so 00:01:40.592 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:40.592 CC lib/nvme/nvme_ctrlr.o 00:01:40.592 CC lib/nvme/nvme_fabric.o 00:01:40.592 CC lib/nvme/nvme_ns_cmd.o 00:01:40.592 CC lib/nvme/nvme_ns.o 00:01:40.592 CC lib/nvme/nvme_pcie_common.o 00:01:40.592 CC lib/nvme/nvme_pcie.o 00:01:40.592 CC lib/nvme/nvme_qpair.o 00:01:40.592 CC lib/nvme/nvme.o 00:01:40.592 CC lib/nvme/nvme_quirks.o 00:01:40.592 CC lib/nvme/nvme_transport.o 00:01:40.592 CC lib/nvme/nvme_discovery.o 00:01:40.592 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:40.592 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:40.592 CC lib/nvme/nvme_tcp.o 00:01:40.592 CC lib/nvme/nvme_opal.o 00:01:40.592 CC lib/nvme/nvme_io_msg.o 00:01:40.592 CC lib/nvme/nvme_poll_group.o 00:01:40.592 CC lib/nvme/nvme_zns.o 00:01:40.592 CC lib/nvme/nvme_stubs.o 00:01:40.592 CC lib/nvme/nvme_auth.o 00:01:40.592 CC lib/nvme/nvme_cuse.o 00:01:40.592 CC lib/nvme/nvme_vfio_user.o 00:01:40.592 CC lib/nvme/nvme_rdma.o 00:01:41.527 LIB libspdk_thread.a 00:01:41.527 SO libspdk_thread.so.10.1 00:01:41.527 SYMLINK libspdk_thread.so 00:01:41.785 CC lib/vfu_tgt/tgt_endpoint.o 00:01:41.785 CC lib/virtio/virtio.o 00:01:41.785 CC lib/blob/blobstore.o 00:01:41.785 CC lib/accel/accel.o 00:01:41.785 CC lib/init/json_config.o 00:01:41.785 CC lib/virtio/virtio_vhost_user.o 00:01:41.785 CC lib/vfu_tgt/tgt_rpc.o 00:01:41.785 CC lib/blob/request.o 00:01:41.785 CC lib/accel/accel_rpc.o 00:01:41.785 CC 
lib/init/subsystem.o 00:01:41.785 CC lib/blob/zeroes.o 00:01:41.785 CC lib/init/subsystem_rpc.o 00:01:41.785 CC lib/accel/accel_sw.o 00:01:41.785 CC lib/virtio/virtio_vfio_user.o 00:01:41.785 CC lib/blob/blob_bs_dev.o 00:01:41.785 CC lib/init/rpc.o 00:01:41.785 CC lib/virtio/virtio_pci.o 00:01:42.043 LIB libspdk_init.a 00:01:42.043 SO libspdk_init.so.5.0 00:01:42.043 LIB libspdk_virtio.a 00:01:42.043 LIB libspdk_vfu_tgt.a 00:01:42.043 SYMLINK libspdk_init.so 00:01:42.301 SO libspdk_virtio.so.7.0 00:01:42.301 SO libspdk_vfu_tgt.so.3.0 00:01:42.301 SYMLINK libspdk_vfu_tgt.so 00:01:42.301 SYMLINK libspdk_virtio.so 00:01:42.301 CC lib/event/app.o 00:01:42.301 CC lib/event/reactor.o 00:01:42.301 CC lib/event/log_rpc.o 00:01:42.301 CC lib/event/app_rpc.o 00:01:42.301 CC lib/event/scheduler_static.o 00:01:42.864 LIB libspdk_event.a 00:01:42.864 SO libspdk_event.so.14.0 00:01:42.864 SYMLINK libspdk_event.so 00:01:42.864 LIB libspdk_accel.a 00:01:42.864 SO libspdk_accel.so.16.0 00:01:42.864 SYMLINK libspdk_accel.so 00:01:43.120 LIB libspdk_nvme.a 00:01:43.120 CC lib/bdev/bdev.o 00:01:43.120 CC lib/bdev/bdev_rpc.o 00:01:43.120 CC lib/bdev/bdev_zone.o 00:01:43.120 CC lib/bdev/part.o 00:01:43.120 CC lib/bdev/scsi_nvme.o 00:01:43.120 SO libspdk_nvme.so.13.1 00:01:43.377 SYMLINK libspdk_nvme.so 00:01:44.750 LIB libspdk_blob.a 00:01:44.750 SO libspdk_blob.so.11.0 00:01:44.750 SYMLINK libspdk_blob.so 00:01:45.008 CC lib/lvol/lvol.o 00:01:45.008 CC lib/blobfs/blobfs.o 00:01:45.008 CC lib/blobfs/tree.o 00:01:45.573 LIB libspdk_bdev.a 00:01:45.573 SO libspdk_bdev.so.16.0 00:01:45.836 SYMLINK libspdk_bdev.so 00:01:45.836 LIB libspdk_blobfs.a 00:01:45.836 LIB libspdk_lvol.a 00:01:45.836 SO libspdk_blobfs.so.10.0 00:01:45.836 SO libspdk_lvol.so.10.0 00:01:45.836 CC lib/scsi/dev.o 00:01:45.836 CC lib/scsi/lun.o 00:01:45.836 CC lib/ftl/ftl_core.o 00:01:45.836 CC lib/scsi/port.o 00:01:45.836 CC lib/nbd/nbd.o 00:01:45.836 CC lib/nvmf/ctrlr.o 00:01:45.836 CC lib/scsi/scsi.o 00:01:45.836 CC 
lib/ublk/ublk.o 00:01:45.836 CC lib/ftl/ftl_init.o 00:01:45.836 CC lib/nvmf/ctrlr_discovery.o 00:01:45.836 CC lib/ublk/ublk_rpc.o 00:01:45.836 CC lib/nbd/nbd_rpc.o 00:01:45.836 CC lib/ftl/ftl_layout.o 00:01:45.836 CC lib/scsi/scsi_bdev.o 00:01:45.836 CC lib/nvmf/ctrlr_bdev.o 00:01:45.836 CC lib/ftl/ftl_debug.o 00:01:45.836 CC lib/scsi/scsi_pr.o 00:01:45.836 CC lib/scsi/scsi_rpc.o 00:01:45.836 CC lib/ftl/ftl_io.o 00:01:45.836 CC lib/nvmf/subsystem.o 00:01:45.836 CC lib/nvmf/nvmf.o 00:01:45.836 CC lib/scsi/task.o 00:01:45.836 CC lib/ftl/ftl_sb.o 00:01:45.836 CC lib/ftl/ftl_l2p.o 00:01:45.836 CC lib/nvmf/nvmf_rpc.o 00:01:45.836 CC lib/nvmf/transport.o 00:01:45.836 CC lib/ftl/ftl_l2p_flat.o 00:01:45.836 CC lib/ftl/ftl_nv_cache.o 00:01:45.836 CC lib/nvmf/tcp.o 00:01:45.836 CC lib/ftl/ftl_band.o 00:01:45.836 CC lib/nvmf/stubs.o 00:01:45.836 CC lib/ftl/ftl_band_ops.o 00:01:45.836 CC lib/nvmf/mdns_server.o 00:01:45.836 CC lib/nvmf/vfio_user.o 00:01:45.836 CC lib/ftl/ftl_writer.o 00:01:45.836 CC lib/nvmf/rdma.o 00:01:45.836 CC lib/ftl/ftl_rq.o 00:01:45.836 CC lib/nvmf/auth.o 00:01:45.836 CC lib/ftl/ftl_reloc.o 00:01:45.836 CC lib/ftl/ftl_l2p_cache.o 00:01:45.836 CC lib/ftl/ftl_p2l.o 00:01:45.836 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.836 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.836 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:46.100 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:46.100 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:46.100 SYMLINK libspdk_lvol.so 00:01:46.100 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:46.100 SYMLINK libspdk_blobfs.so 00:01:46.100 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:46.361 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:46.361 CC lib/ftl/utils/ftl_conf.o 00:01:46.361 CC lib/ftl/utils/ftl_md.o 00:01:46.361 CC lib/ftl/utils/ftl_mempool.o 
00:01:46.361 CC lib/ftl/utils/ftl_property.o 00:01:46.361 CC lib/ftl/utils/ftl_bitmap.o 00:01:46.361 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:46.361 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:46.361 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:46.361 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:46.361 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:46.361 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:46.621 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:46.621 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:46.621 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:46.621 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:46.621 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:46.621 CC lib/ftl/base/ftl_base_dev.o 00:01:46.621 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.621 CC lib/ftl/ftl_trace.o 00:01:46.879 LIB libspdk_nbd.a 00:01:46.879 SO libspdk_nbd.so.7.0 00:01:46.879 LIB libspdk_scsi.a 00:01:46.879 SYMLINK libspdk_nbd.so 00:01:46.879 SO libspdk_scsi.so.9.0 00:01:47.136 SYMLINK libspdk_scsi.so 00:01:47.136 LIB libspdk_ublk.a 00:01:47.136 SO libspdk_ublk.so.3.0 00:01:47.136 SYMLINK libspdk_ublk.so 00:01:47.136 CC lib/vhost/vhost.o 00:01:47.136 CC lib/iscsi/conn.o 00:01:47.136 CC lib/vhost/vhost_rpc.o 00:01:47.136 CC lib/iscsi/init_grp.o 00:01:47.136 CC lib/vhost/vhost_scsi.o 00:01:47.136 CC lib/iscsi/iscsi.o 00:01:47.136 CC lib/vhost/vhost_blk.o 00:01:47.136 CC lib/iscsi/md5.o 00:01:47.136 CC lib/vhost/rte_vhost_user.o 00:01:47.136 CC lib/iscsi/param.o 00:01:47.136 CC lib/iscsi/portal_grp.o 00:01:47.136 CC lib/iscsi/tgt_node.o 00:01:47.136 CC lib/iscsi/iscsi_subsystem.o 00:01:47.136 CC lib/iscsi/iscsi_rpc.o 00:01:47.136 CC lib/iscsi/task.o 00:01:47.395 LIB libspdk_ftl.a 00:01:47.653 SO libspdk_ftl.so.9.0 00:01:47.910 SYMLINK libspdk_ftl.so 00:01:48.475 LIB libspdk_vhost.a 00:01:48.475 SO libspdk_vhost.so.8.0 00:01:48.475 LIB libspdk_nvmf.a 00:01:48.475 SYMLINK libspdk_vhost.so 00:01:48.475 SO libspdk_nvmf.so.19.0 00:01:48.475 LIB libspdk_iscsi.a 00:01:48.734 SO libspdk_iscsi.so.8.0 00:01:48.734 SYMLINK 
libspdk_nvmf.so 00:01:48.734 SYMLINK libspdk_iscsi.so 00:01:48.993 CC module/env_dpdk/env_dpdk_rpc.o 00:01:48.993 CC module/vfu_device/vfu_virtio.o 00:01:48.993 CC module/vfu_device/vfu_virtio_blk.o 00:01:48.993 CC module/vfu_device/vfu_virtio_scsi.o 00:01:48.993 CC module/vfu_device/vfu_virtio_rpc.o 00:01:49.251 CC module/keyring/linux/keyring.o 00:01:49.251 CC module/keyring/linux/keyring_rpc.o 00:01:49.251 CC module/scheduler/gscheduler/gscheduler.o 00:01:49.251 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:49.251 CC module/keyring/file/keyring.o 00:01:49.251 CC module/blob/bdev/blob_bdev.o 00:01:49.251 CC module/keyring/file/keyring_rpc.o 00:01:49.251 CC module/accel/dsa/accel_dsa.o 00:01:49.251 CC module/accel/ioat/accel_ioat.o 00:01:49.251 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:49.251 CC module/accel/error/accel_error.o 00:01:49.251 CC module/accel/dsa/accel_dsa_rpc.o 00:01:49.251 CC module/accel/ioat/accel_ioat_rpc.o 00:01:49.251 CC module/accel/error/accel_error_rpc.o 00:01:49.251 CC module/accel/iaa/accel_iaa.o 00:01:49.251 CC module/accel/iaa/accel_iaa_rpc.o 00:01:49.251 CC module/sock/posix/posix.o 00:01:49.251 LIB libspdk_env_dpdk_rpc.a 00:01:49.251 SO libspdk_env_dpdk_rpc.so.6.0 00:01:49.251 SYMLINK libspdk_env_dpdk_rpc.so 00:01:49.251 LIB libspdk_keyring_linux.a 00:01:49.251 LIB libspdk_scheduler_gscheduler.a 00:01:49.251 LIB libspdk_scheduler_dpdk_governor.a 00:01:49.251 SO libspdk_keyring_linux.so.1.0 00:01:49.251 SO libspdk_scheduler_gscheduler.so.4.0 00:01:49.251 LIB libspdk_keyring_file.a 00:01:49.251 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:49.251 LIB libspdk_accel_error.a 00:01:49.251 LIB libspdk_accel_ioat.a 00:01:49.251 LIB libspdk_scheduler_dynamic.a 00:01:49.508 SO libspdk_keyring_file.so.1.0 00:01:49.508 LIB libspdk_accel_iaa.a 00:01:49.509 SO libspdk_accel_error.so.2.0 00:01:49.509 SYMLINK libspdk_keyring_linux.so 00:01:49.509 SO libspdk_accel_ioat.so.6.0 00:01:49.509 SYMLINK libspdk_scheduler_gscheduler.so 
00:01:49.509 SO libspdk_scheduler_dynamic.so.4.0 00:01:49.509 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:49.509 SO libspdk_accel_iaa.so.3.0 00:01:49.509 SYMLINK libspdk_keyring_file.so 00:01:49.509 LIB libspdk_accel_dsa.a 00:01:49.509 SYMLINK libspdk_accel_error.so 00:01:49.509 SYMLINK libspdk_scheduler_dynamic.so 00:01:49.509 SYMLINK libspdk_accel_ioat.so 00:01:49.509 LIB libspdk_blob_bdev.a 00:01:49.509 SO libspdk_accel_dsa.so.5.0 00:01:49.509 SYMLINK libspdk_accel_iaa.so 00:01:49.509 SO libspdk_blob_bdev.so.11.0 00:01:49.509 SYMLINK libspdk_accel_dsa.so 00:01:49.509 SYMLINK libspdk_blob_bdev.so 00:01:49.768 LIB libspdk_vfu_device.a 00:01:49.768 SO libspdk_vfu_device.so.3.0 00:01:49.768 CC module/bdev/lvol/vbdev_lvol.o 00:01:49.768 CC module/bdev/gpt/gpt.o 00:01:49.768 CC module/bdev/delay/vbdev_delay.o 00:01:49.768 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:49.768 CC module/bdev/gpt/vbdev_gpt.o 00:01:49.768 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:49.768 CC module/bdev/error/vbdev_error.o 00:01:49.768 CC module/bdev/null/bdev_null.o 00:01:49.768 CC module/bdev/iscsi/bdev_iscsi.o 00:01:49.768 CC module/bdev/error/vbdev_error_rpc.o 00:01:49.768 CC module/bdev/null/bdev_null_rpc.o 00:01:49.768 CC module/bdev/ftl/bdev_ftl.o 00:01:49.768 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:49.768 CC module/blobfs/bdev/blobfs_bdev.o 00:01:49.768 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:49.768 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:49.768 CC module/bdev/passthru/vbdev_passthru.o 00:01:49.768 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:49.768 CC module/bdev/split/vbdev_split.o 00:01:49.768 CC module/bdev/nvme/bdev_nvme.o 00:01:49.768 CC module/bdev/raid/bdev_raid.o 00:01:49.768 CC module/bdev/malloc/bdev_malloc.o 00:01:49.768 CC module/bdev/nvme/nvme_rpc.o 00:01:49.768 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:49.768 CC module/bdev/raid/bdev_raid_rpc.o 00:01:49.768 CC module/bdev/split/vbdev_split_rpc.o 00:01:49.768 CC module/bdev/raid/bdev_raid_sb.o 
00:01:49.768 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:49.768 CC module/bdev/raid/raid0.o 00:01:49.768 CC module/bdev/nvme/bdev_mdns_client.o 00:01:49.768 CC module/bdev/raid/raid1.o 00:01:49.768 CC module/bdev/nvme/vbdev_opal.o 00:01:49.768 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:49.768 CC module/bdev/raid/concat.o 00:01:49.768 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:49.768 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:49.768 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:49.768 CC module/bdev/aio/bdev_aio.o 00:01:49.768 CC module/bdev/aio/bdev_aio_rpc.o 00:01:49.768 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:49.769 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:49.769 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:50.028 SYMLINK libspdk_vfu_device.so 00:01:50.028 LIB libspdk_sock_posix.a 00:01:50.028 SO libspdk_sock_posix.so.6.0 00:01:50.028 SYMLINK libspdk_sock_posix.so 00:01:50.286 LIB libspdk_blobfs_bdev.a 00:01:50.286 SO libspdk_blobfs_bdev.so.6.0 00:01:50.286 LIB libspdk_bdev_error.a 00:01:50.286 LIB libspdk_bdev_split.a 00:01:50.286 SYMLINK libspdk_blobfs_bdev.so 00:01:50.286 SO libspdk_bdev_error.so.6.0 00:01:50.286 SO libspdk_bdev_split.so.6.0 00:01:50.286 LIB libspdk_bdev_ftl.a 00:01:50.286 LIB libspdk_bdev_null.a 00:01:50.286 LIB libspdk_bdev_passthru.a 00:01:50.286 SO libspdk_bdev_ftl.so.6.0 00:01:50.286 LIB libspdk_bdev_gpt.a 00:01:50.286 SYMLINK libspdk_bdev_error.so 00:01:50.286 SO libspdk_bdev_null.so.6.0 00:01:50.286 SO libspdk_bdev_passthru.so.6.0 00:01:50.286 SYMLINK libspdk_bdev_split.so 00:01:50.286 LIB libspdk_bdev_zone_block.a 00:01:50.286 SO libspdk_bdev_gpt.so.6.0 00:01:50.286 LIB libspdk_bdev_iscsi.a 00:01:50.286 LIB libspdk_bdev_aio.a 00:01:50.286 SYMLINK libspdk_bdev_ftl.so 00:01:50.286 SO libspdk_bdev_zone_block.so.6.0 00:01:50.286 SO libspdk_bdev_iscsi.so.6.0 00:01:50.286 SYMLINK libspdk_bdev_null.so 00:01:50.286 SYMLINK libspdk_bdev_passthru.so 00:01:50.286 SO libspdk_bdev_aio.so.6.0 00:01:50.286 LIB 
libspdk_bdev_delay.a 00:01:50.556 SYMLINK libspdk_bdev_gpt.so 00:01:50.556 LIB libspdk_bdev_malloc.a 00:01:50.556 SO libspdk_bdev_delay.so.6.0 00:01:50.556 SYMLINK libspdk_bdev_zone_block.so 00:01:50.556 SYMLINK libspdk_bdev_iscsi.so 00:01:50.556 SO libspdk_bdev_malloc.so.6.0 00:01:50.556 SYMLINK libspdk_bdev_aio.so 00:01:50.556 SYMLINK libspdk_bdev_delay.so 00:01:50.556 SYMLINK libspdk_bdev_malloc.so 00:01:50.556 LIB libspdk_bdev_virtio.a 00:01:50.556 LIB libspdk_bdev_lvol.a 00:01:50.556 SO libspdk_bdev_lvol.so.6.0 00:01:50.556 SO libspdk_bdev_virtio.so.6.0 00:01:50.556 SYMLINK libspdk_bdev_lvol.so 00:01:50.556 SYMLINK libspdk_bdev_virtio.so 00:01:50.841 LIB libspdk_bdev_raid.a 00:01:51.099 SO libspdk_bdev_raid.so.6.0 00:01:51.099 SYMLINK libspdk_bdev_raid.so 00:01:52.470 LIB libspdk_bdev_nvme.a 00:01:52.470 SO libspdk_bdev_nvme.so.7.0 00:01:52.470 SYMLINK libspdk_bdev_nvme.so 00:01:52.729 CC module/event/subsystems/vmd/vmd.o 00:01:52.729 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:52.729 CC module/event/subsystems/sock/sock.o 00:01:52.729 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:52.729 CC module/event/subsystems/keyring/keyring.o 00:01:52.729 CC module/event/subsystems/iobuf/iobuf.o 00:01:52.729 CC module/event/subsystems/scheduler/scheduler.o 00:01:52.729 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:52.729 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:52.729 LIB libspdk_event_keyring.a 00:01:52.729 LIB libspdk_event_scheduler.a 00:01:52.729 LIB libspdk_event_vmd.a 00:01:52.729 LIB libspdk_event_vhost_blk.a 00:01:52.988 LIB libspdk_event_vfu_tgt.a 00:01:52.988 LIB libspdk_event_sock.a 00:01:52.988 SO libspdk_event_keyring.so.1.0 00:01:52.988 SO libspdk_event_scheduler.so.4.0 00:01:52.988 SO libspdk_event_vhost_blk.so.3.0 00:01:52.988 LIB libspdk_event_iobuf.a 00:01:52.988 SO libspdk_event_vfu_tgt.so.3.0 00:01:52.988 SO libspdk_event_vmd.so.6.0 00:01:52.988 SO libspdk_event_sock.so.5.0 00:01:52.988 SO libspdk_event_iobuf.so.3.0 
00:01:52.988 SYMLINK libspdk_event_keyring.so 00:01:52.988 SYMLINK libspdk_event_scheduler.so 00:01:52.988 SYMLINK libspdk_event_vhost_blk.so 00:01:52.988 SYMLINK libspdk_event_vfu_tgt.so 00:01:52.988 SYMLINK libspdk_event_sock.so 00:01:52.988 SYMLINK libspdk_event_vmd.so 00:01:52.988 SYMLINK libspdk_event_iobuf.so 00:01:53.245 CC module/event/subsystems/accel/accel.o 00:01:53.245 LIB libspdk_event_accel.a 00:01:53.245 SO libspdk_event_accel.so.6.0 00:01:53.245 SYMLINK libspdk_event_accel.so 00:01:53.503 CC module/event/subsystems/bdev/bdev.o 00:01:53.761 LIB libspdk_event_bdev.a 00:01:53.761 SO libspdk_event_bdev.so.6.0 00:01:53.761 SYMLINK libspdk_event_bdev.so 00:01:54.019 CC module/event/subsystems/ublk/ublk.o 00:01:54.019 CC module/event/subsystems/nbd/nbd.o 00:01:54.019 CC module/event/subsystems/scsi/scsi.o 00:01:54.019 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:54.019 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:54.019 LIB libspdk_event_ublk.a 00:01:54.019 LIB libspdk_event_nbd.a 00:01:54.019 LIB libspdk_event_scsi.a 00:01:54.019 SO libspdk_event_ublk.so.3.0 00:01:54.019 SO libspdk_event_nbd.so.6.0 00:01:54.019 SO libspdk_event_scsi.so.6.0 00:01:54.276 SYMLINK libspdk_event_ublk.so 00:01:54.276 SYMLINK libspdk_event_nbd.so 00:01:54.276 LIB libspdk_event_nvmf.a 00:01:54.276 SYMLINK libspdk_event_scsi.so 00:01:54.276 SO libspdk_event_nvmf.so.6.0 00:01:54.276 SYMLINK libspdk_event_nvmf.so 00:01:54.276 CC module/event/subsystems/iscsi/iscsi.o 00:01:54.276 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:54.534 LIB libspdk_event_vhost_scsi.a 00:01:54.534 LIB libspdk_event_iscsi.a 00:01:54.534 SO libspdk_event_vhost_scsi.so.3.0 00:01:54.534 SO libspdk_event_iscsi.so.6.0 00:01:54.534 SYMLINK libspdk_event_vhost_scsi.so 00:01:54.534 SYMLINK libspdk_event_iscsi.so 00:01:54.792 SO libspdk.so.6.0 00:01:54.792 SYMLINK libspdk.so 00:01:54.792 CC app/trace_record/trace_record.o 00:01:54.792 CC app/spdk_top/spdk_top.o 00:01:54.792 CC 
app/spdk_lspci/spdk_lspci.o 00:01:54.792 CXX app/trace/trace.o 00:01:54.792 CC app/spdk_nvme_identify/identify.o 00:01:54.792 CC app/spdk_nvme_discover/discovery_aer.o 00:01:54.792 TEST_HEADER include/spdk/accel.h 00:01:54.792 CC test/rpc_client/rpc_client_test.o 00:01:54.792 CC app/spdk_nvme_perf/perf.o 00:01:54.792 TEST_HEADER include/spdk/accel_module.h 00:01:54.792 TEST_HEADER include/spdk/assert.h 00:01:54.792 TEST_HEADER include/spdk/barrier.h 00:01:54.792 TEST_HEADER include/spdk/base64.h 00:01:54.792 TEST_HEADER include/spdk/bdev_module.h 00:01:54.792 TEST_HEADER include/spdk/bdev.h 00:01:54.792 TEST_HEADER include/spdk/bdev_zone.h 00:01:54.792 TEST_HEADER include/spdk/bit_array.h 00:01:54.792 TEST_HEADER include/spdk/bit_pool.h 00:01:54.792 TEST_HEADER include/spdk/blob_bdev.h 00:01:54.792 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:54.792 TEST_HEADER include/spdk/blobfs.h 00:01:54.792 TEST_HEADER include/spdk/blob.h 00:01:54.792 TEST_HEADER include/spdk/conf.h 00:01:54.792 TEST_HEADER include/spdk/config.h 00:01:54.792 TEST_HEADER include/spdk/cpuset.h 00:01:54.792 TEST_HEADER include/spdk/crc16.h 00:01:54.792 TEST_HEADER include/spdk/crc32.h 00:01:54.792 TEST_HEADER include/spdk/crc64.h 00:01:54.792 TEST_HEADER include/spdk/dif.h 00:01:54.792 TEST_HEADER include/spdk/dma.h 00:01:54.792 TEST_HEADER include/spdk/endian.h 00:01:54.792 TEST_HEADER include/spdk/env_dpdk.h 00:01:54.792 TEST_HEADER include/spdk/env.h 00:01:54.792 TEST_HEADER include/spdk/event.h 00:01:54.792 TEST_HEADER include/spdk/fd_group.h 00:01:54.792 TEST_HEADER include/spdk/fd.h 00:01:54.792 TEST_HEADER include/spdk/file.h 00:01:54.792 TEST_HEADER include/spdk/ftl.h 00:01:54.792 TEST_HEADER include/spdk/gpt_spec.h 00:01:54.792 TEST_HEADER include/spdk/hexlify.h 00:01:54.792 TEST_HEADER include/spdk/histogram_data.h 00:01:54.792 TEST_HEADER include/spdk/idxd.h 00:01:54.792 TEST_HEADER include/spdk/idxd_spec.h 00:01:54.793 TEST_HEADER include/spdk/init.h 00:01:54.793 TEST_HEADER 
include/spdk/ioat.h 00:01:54.793 TEST_HEADER include/spdk/ioat_spec.h 00:01:54.793 TEST_HEADER include/spdk/iscsi_spec.h 00:01:54.793 TEST_HEADER include/spdk/json.h 00:01:54.793 TEST_HEADER include/spdk/jsonrpc.h 00:01:54.793 TEST_HEADER include/spdk/keyring.h 00:01:54.793 TEST_HEADER include/spdk/keyring_module.h 00:01:54.793 TEST_HEADER include/spdk/likely.h 00:01:54.793 TEST_HEADER include/spdk/log.h 00:01:54.793 TEST_HEADER include/spdk/lvol.h 00:01:54.793 TEST_HEADER include/spdk/memory.h 00:01:54.793 TEST_HEADER include/spdk/mmio.h 00:01:55.058 TEST_HEADER include/spdk/nbd.h 00:01:55.058 TEST_HEADER include/spdk/net.h 00:01:55.058 TEST_HEADER include/spdk/notify.h 00:01:55.058 TEST_HEADER include/spdk/nvme.h 00:01:55.058 TEST_HEADER include/spdk/nvme_intel.h 00:01:55.058 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:55.058 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:55.058 TEST_HEADER include/spdk/nvme_spec.h 00:01:55.058 TEST_HEADER include/spdk/nvme_zns.h 00:01:55.058 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:55.058 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:55.058 TEST_HEADER include/spdk/nvmf.h 00:01:55.058 TEST_HEADER include/spdk/nvmf_spec.h 00:01:55.058 TEST_HEADER include/spdk/opal.h 00:01:55.058 TEST_HEADER include/spdk/nvmf_transport.h 00:01:55.058 TEST_HEADER include/spdk/opal_spec.h 00:01:55.058 TEST_HEADER include/spdk/pci_ids.h 00:01:55.058 TEST_HEADER include/spdk/pipe.h 00:01:55.058 TEST_HEADER include/spdk/queue.h 00:01:55.058 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:55.058 TEST_HEADER include/spdk/reduce.h 00:01:55.058 TEST_HEADER include/spdk/rpc.h 00:01:55.058 TEST_HEADER include/spdk/scheduler.h 00:01:55.058 TEST_HEADER include/spdk/scsi.h 00:01:55.058 TEST_HEADER include/spdk/scsi_spec.h 00:01:55.058 TEST_HEADER include/spdk/sock.h 00:01:55.058 TEST_HEADER include/spdk/stdinc.h 00:01:55.058 TEST_HEADER include/spdk/string.h 00:01:55.058 TEST_HEADER include/spdk/thread.h 00:01:55.058 TEST_HEADER include/spdk/trace.h 
00:01:55.058 TEST_HEADER include/spdk/trace_parser.h 00:01:55.058 TEST_HEADER include/spdk/tree.h 00:01:55.058 TEST_HEADER include/spdk/util.h 00:01:55.058 TEST_HEADER include/spdk/ublk.h 00:01:55.058 TEST_HEADER include/spdk/uuid.h 00:01:55.058 TEST_HEADER include/spdk/version.h 00:01:55.058 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:55.058 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:55.058 TEST_HEADER include/spdk/vhost.h 00:01:55.058 TEST_HEADER include/spdk/vmd.h 00:01:55.058 TEST_HEADER include/spdk/xor.h 00:01:55.058 TEST_HEADER include/spdk/zipf.h 00:01:55.058 CXX test/cpp_headers/accel.o 00:01:55.058 CXX test/cpp_headers/accel_module.o 00:01:55.058 CXX test/cpp_headers/assert.o 00:01:55.058 CC app/iscsi_tgt/iscsi_tgt.o 00:01:55.058 CXX test/cpp_headers/barrier.o 00:01:55.058 CXX test/cpp_headers/base64.o 00:01:55.058 CXX test/cpp_headers/bdev.o 00:01:55.058 CXX test/cpp_headers/bdev_module.o 00:01:55.058 CXX test/cpp_headers/bdev_zone.o 00:01:55.058 CXX test/cpp_headers/bit_pool.o 00:01:55.058 CXX test/cpp_headers/bit_array.o 00:01:55.058 CXX test/cpp_headers/blob_bdev.o 00:01:55.058 CXX test/cpp_headers/blobfs_bdev.o 00:01:55.058 CXX test/cpp_headers/blobfs.o 00:01:55.058 CXX test/cpp_headers/blob.o 00:01:55.058 CXX test/cpp_headers/config.o 00:01:55.058 CXX test/cpp_headers/conf.o 00:01:55.058 CC app/spdk_dd/spdk_dd.o 00:01:55.058 CXX test/cpp_headers/cpuset.o 00:01:55.058 CC app/nvmf_tgt/nvmf_main.o 00:01:55.058 CXX test/cpp_headers/crc16.o 00:01:55.058 CXX test/cpp_headers/crc32.o 00:01:55.058 CC examples/util/zipf/zipf.o 00:01:55.058 CC app/spdk_tgt/spdk_tgt.o 00:01:55.058 CC test/app/jsoncat/jsoncat.o 00:01:55.058 CC test/env/vtophys/vtophys.o 00:01:55.058 CC test/env/pci/pci_ut.o 00:01:55.058 CC test/app/histogram_perf/histogram_perf.o 00:01:55.058 CC test/env/memory/memory_ut.o 00:01:55.058 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:55.058 CC examples/ioat/verify/verify.o 00:01:55.058 CC examples/ioat/perf/perf.o 00:01:55.058 
CC test/thread/poller_perf/poller_perf.o 00:01:55.058 CC test/app/stub/stub.o 00:01:55.058 CC app/fio/nvme/fio_plugin.o 00:01:55.058 CC app/fio/bdev/fio_plugin.o 00:01:55.058 CC test/dma/test_dma/test_dma.o 00:01:55.058 CC test/app/bdev_svc/bdev_svc.o 00:01:55.320 LINK spdk_lspci 00:01:55.320 CC test/env/mem_callbacks/mem_callbacks.o 00:01:55.320 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:55.320 LINK rpc_client_test 00:01:55.320 LINK spdk_nvme_discover 00:01:55.320 LINK zipf 00:01:55.320 LINK jsoncat 00:01:55.320 LINK poller_perf 00:01:55.320 CXX test/cpp_headers/crc64.o 00:01:55.320 LINK histogram_perf 00:01:55.320 LINK vtophys 00:01:55.320 CXX test/cpp_headers/dif.o 00:01:55.320 LINK spdk_trace_record 00:01:55.320 LINK interrupt_tgt 00:01:55.320 CXX test/cpp_headers/dma.o 00:01:55.320 LINK env_dpdk_post_init 00:01:55.320 CXX test/cpp_headers/endian.o 00:01:55.320 CXX test/cpp_headers/env_dpdk.o 00:01:55.320 LINK nvmf_tgt 00:01:55.320 CXX test/cpp_headers/env.o 00:01:55.320 CXX test/cpp_headers/event.o 00:01:55.320 CXX test/cpp_headers/fd_group.o 00:01:55.320 CXX test/cpp_headers/fd.o 00:01:55.320 CXX test/cpp_headers/file.o 00:01:55.320 LINK stub 00:01:55.320 CXX test/cpp_headers/ftl.o 00:01:55.320 CXX test/cpp_headers/gpt_spec.o 00:01:55.582 LINK iscsi_tgt 00:01:55.582 CXX test/cpp_headers/hexlify.o 00:01:55.582 CXX test/cpp_headers/histogram_data.o 00:01:55.582 CXX test/cpp_headers/idxd.o 00:01:55.582 LINK ioat_perf 00:01:55.582 LINK bdev_svc 00:01:55.582 CXX test/cpp_headers/idxd_spec.o 00:01:55.582 LINK spdk_tgt 00:01:55.582 CXX test/cpp_headers/init.o 00:01:55.582 LINK verify 00:01:55.582 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:55.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:55.582 CXX test/cpp_headers/ioat.o 00:01:55.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:55.582 CXX test/cpp_headers/ioat_spec.o 00:01:55.582 CXX test/cpp_headers/iscsi_spec.o 00:01:55.582 CXX test/cpp_headers/json.o 00:01:55.582 LINK spdk_dd 00:01:55.851 CXX 
test/cpp_headers/jsonrpc.o 00:01:55.851 LINK spdk_trace 00:01:55.851 CXX test/cpp_headers/keyring.o 00:01:55.851 CXX test/cpp_headers/keyring_module.o 00:01:55.851 CXX test/cpp_headers/likely.o 00:01:55.851 CXX test/cpp_headers/log.o 00:01:55.851 CXX test/cpp_headers/lvol.o 00:01:55.851 CXX test/cpp_headers/memory.o 00:01:55.851 CXX test/cpp_headers/mmio.o 00:01:55.851 CXX test/cpp_headers/nbd.o 00:01:55.851 CXX test/cpp_headers/net.o 00:01:55.851 CXX test/cpp_headers/notify.o 00:01:55.851 CXX test/cpp_headers/nvme.o 00:01:55.851 CXX test/cpp_headers/nvme_intel.o 00:01:55.851 LINK pci_ut 00:01:55.851 CXX test/cpp_headers/nvme_ocssd.o 00:01:55.851 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:55.851 CXX test/cpp_headers/nvme_zns.o 00:01:55.851 CXX test/cpp_headers/nvme_spec.o 00:01:55.851 CXX test/cpp_headers/nvmf_cmd.o 00:01:55.851 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:55.851 LINK test_dma 00:01:55.851 CXX test/cpp_headers/nvmf.o 00:01:55.851 CXX test/cpp_headers/nvmf_spec.o 00:01:55.851 CXX test/cpp_headers/nvmf_transport.o 00:01:56.120 CXX test/cpp_headers/opal.o 00:01:56.120 CC test/event/event_perf/event_perf.o 00:01:56.120 CC test/event/reactor/reactor.o 00:01:56.120 CXX test/cpp_headers/opal_spec.o 00:01:56.120 CC test/event/reactor_perf/reactor_perf.o 00:01:56.120 CC examples/sock/hello_world/hello_sock.o 00:01:56.120 CXX test/cpp_headers/pci_ids.o 00:01:56.120 CXX test/cpp_headers/pipe.o 00:01:56.120 CC examples/thread/thread/thread_ex.o 00:01:56.120 CC examples/vmd/lsvmd/lsvmd.o 00:01:56.120 CXX test/cpp_headers/queue.o 00:01:56.120 CC examples/idxd/perf/perf.o 00:01:56.120 LINK spdk_nvme 00:01:56.120 LINK nvme_fuzz 00:01:56.120 LINK spdk_bdev 00:01:56.120 CXX test/cpp_headers/reduce.o 00:01:56.120 CC examples/vmd/led/led.o 00:01:56.120 CC test/event/app_repeat/app_repeat.o 00:01:56.120 CXX test/cpp_headers/rpc.o 00:01:56.120 CXX test/cpp_headers/scheduler.o 00:01:56.120 CXX test/cpp_headers/scsi.o 00:01:56.120 CXX test/cpp_headers/scsi_spec.o 
00:01:56.120 CXX test/cpp_headers/sock.o 00:01:56.120 CXX test/cpp_headers/stdinc.o 00:01:56.120 CXX test/cpp_headers/string.o 00:01:56.120 CXX test/cpp_headers/thread.o 00:01:56.120 CXX test/cpp_headers/trace.o 00:01:56.120 CXX test/cpp_headers/trace_parser.o 00:01:56.382 CXX test/cpp_headers/tree.o 00:01:56.382 CXX test/cpp_headers/ublk.o 00:01:56.382 CXX test/cpp_headers/util.o 00:01:56.382 CXX test/cpp_headers/uuid.o 00:01:56.382 CXX test/cpp_headers/version.o 00:01:56.382 LINK reactor 00:01:56.382 LINK event_perf 00:01:56.382 CXX test/cpp_headers/vfio_user_pci.o 00:01:56.382 CXX test/cpp_headers/vfio_user_spec.o 00:01:56.382 LINK reactor_perf 00:01:56.382 CXX test/cpp_headers/vhost.o 00:01:56.382 CXX test/cpp_headers/vmd.o 00:01:56.382 CXX test/cpp_headers/xor.o 00:01:56.382 CXX test/cpp_headers/zipf.o 00:01:56.382 LINK lsvmd 00:01:56.382 CC test/event/scheduler/scheduler.o 00:01:56.382 LINK spdk_nvme_perf 00:01:56.382 LINK mem_callbacks 00:01:56.382 LINK vhost_fuzz 00:01:56.382 CC app/vhost/vhost.o 00:01:56.382 LINK app_repeat 00:01:56.382 LINK led 00:01:56.382 LINK spdk_nvme_identify 00:01:56.641 LINK hello_sock 00:01:56.641 LINK spdk_top 00:01:56.641 LINK thread 00:01:56.641 CC test/nvme/sgl/sgl.o 00:01:56.641 CC test/nvme/e2edp/nvme_dp.o 00:01:56.641 CC test/nvme/err_injection/err_injection.o 00:01:56.641 CC test/nvme/startup/startup.o 00:01:56.641 CC test/nvme/reset/reset.o 00:01:56.641 CC test/nvme/overhead/overhead.o 00:01:56.641 CC test/nvme/aer/aer.o 00:01:56.641 CC test/nvme/reserve/reserve.o 00:01:56.641 CC test/nvme/simple_copy/simple_copy.o 00:01:56.641 CC test/blobfs/mkfs/mkfs.o 00:01:56.641 CC test/accel/dif/dif.o 00:01:56.641 CC test/nvme/connect_stress/connect_stress.o 00:01:56.641 CC test/nvme/boot_partition/boot_partition.o 00:01:56.641 CC test/nvme/compliance/nvme_compliance.o 00:01:56.641 CC test/nvme/fused_ordering/fused_ordering.o 00:01:56.641 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:56.641 CC test/nvme/fdp/fdp.o 00:01:56.641 CC 
test/nvme/cuse/cuse.o 00:01:56.641 CC test/lvol/esnap/esnap.o 00:01:56.641 LINK idxd_perf 00:01:56.899 LINK vhost 00:01:56.899 LINK scheduler 00:01:56.899 LINK err_injection 00:01:56.899 LINK connect_stress 00:01:56.899 LINK reserve 00:01:56.899 LINK startup 00:01:56.899 LINK fused_ordering 00:01:56.899 LINK doorbell_aers 00:01:56.899 LINK reset 00:01:56.899 LINK simple_copy 00:01:56.899 LINK boot_partition 00:01:56.899 CC examples/nvme/abort/abort.o 00:01:56.899 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:56.899 CC examples/nvme/reconnect/reconnect.o 00:01:56.899 CC examples/nvme/hello_world/hello_world.o 00:01:56.899 CC examples/nvme/arbitration/arbitration.o 00:01:56.899 CC examples/nvme/hotplug/hotplug.o 00:01:56.899 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:56.899 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:56.899 LINK sgl 00:01:57.158 LINK overhead 00:01:57.158 CC examples/accel/perf/accel_perf.o 00:01:57.158 LINK mkfs 00:01:57.158 LINK memory_ut 00:01:57.158 LINK nvme_dp 00:01:57.158 CC examples/blob/cli/blobcli.o 00:01:57.158 CC examples/blob/hello_world/hello_blob.o 00:01:57.158 LINK fdp 00:01:57.158 LINK aer 00:01:57.158 LINK nvme_compliance 00:01:57.416 LINK pmr_persistence 00:01:57.416 LINK hotplug 00:01:57.416 LINK cmb_copy 00:01:57.416 LINK hello_world 00:01:57.416 LINK arbitration 00:01:57.416 LINK hello_blob 00:01:57.416 LINK abort 00:01:57.416 LINK dif 00:01:57.416 LINK reconnect 00:01:57.674 LINK nvme_manage 00:01:57.674 LINK accel_perf 00:01:57.674 LINK blobcli 00:01:57.944 LINK iscsi_fuzz 00:01:57.944 CC test/bdev/bdevio/bdevio.o 00:01:57.944 CC examples/bdev/hello_world/hello_bdev.o 00:01:57.944 CC examples/bdev/bdevperf/bdevperf.o 00:01:58.202 LINK hello_bdev 00:01:58.202 LINK cuse 00:01:58.202 LINK bdevio 00:01:58.768 LINK bdevperf 00:01:59.334 CC examples/nvmf/nvmf/nvmf.o 00:01:59.334 LINK nvmf 00:02:01.862 LINK esnap 00:02:02.120 00:02:02.120 real 0m49.178s 00:02:02.120 user 10m5.682s 00:02:02.120 sys 2m27.926s 
00:02:02.120 20:29:57 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:02.120 20:29:57 make -- common/autotest_common.sh@10 -- $ set +x 00:02:02.120 ************************************ 00:02:02.120 END TEST make 00:02:02.120 ************************************ 00:02:02.120 20:29:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:02.120 20:29:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:02.120 20:29:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:02.120 20:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.120 20:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:02.120 20:29:57 -- pm/common@44 -- $ pid=1381912 00:02:02.120 20:29:57 -- pm/common@50 -- $ kill -TERM 1381912 00:02:02.120 20:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.120 20:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:02.120 20:29:57 -- pm/common@44 -- $ pid=1381914 00:02:02.120 20:29:57 -- pm/common@50 -- $ kill -TERM 1381914 00:02:02.120 20:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.120 20:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:02.120 20:29:57 -- pm/common@44 -- $ pid=1381916 00:02:02.120 20:29:57 -- pm/common@50 -- $ kill -TERM 1381916 00:02:02.120 20:29:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.120 20:29:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:02.120 20:29:57 -- pm/common@44 -- $ pid=1381945 00:02:02.120 20:29:57 -- pm/common@50 -- $ sudo -E kill -TERM 1381945 00:02:02.379 20:29:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:02:02.379 20:29:57 -- nvmf/common.sh@7 -- # uname -s 00:02:02.379 20:29:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:02.379 20:29:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:02.379 20:29:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:02.379 20:29:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:02.379 20:29:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:02.379 20:29:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:02.379 20:29:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:02.379 20:29:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:02.379 20:29:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:02.379 20:29:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:02.379 20:29:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:02.379 20:29:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:02.379 20:29:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:02.379 20:29:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:02.379 20:29:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:02.379 20:29:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:02.379 20:29:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:02.379 20:29:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:02.379 20:29:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.379 20:29:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.379 20:29:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.379 20:29:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.379 20:29:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.379 20:29:57 -- paths/export.sh@5 -- # export PATH 00:02:02.379 20:29:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.379 20:29:57 -- nvmf/common.sh@47 -- # : 0 00:02:02.379 20:29:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:02.379 20:29:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:02.379 20:29:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:02.379 20:29:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:02.379 20:29:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:02.379 20:29:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:02.379 20:29:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:02.379 20:29:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:02.379 20:29:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:02.379 20:29:57 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:02.379 20:29:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:02.379 20:29:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:02.379 20:29:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:02.379 20:29:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:02.379 20:29:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:02.379 20:29:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:02.379 20:29:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:02.379 20:29:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:02.379 20:29:57 -- spdk/autotest.sh@48 -- # udevadm_pid=1437390 00:02:02.379 20:29:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:02.379 20:29:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:02.379 20:29:57 -- pm/common@17 -- # local monitor 00:02:02.379 20:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.379 20:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.379 20:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.379 20:29:57 -- pm/common@21 -- # date +%s 00:02:02.379 20:29:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.379 20:29:57 -- pm/common@21 -- # date +%s 00:02:02.379 20:29:57 -- pm/common@25 -- # sleep 1 00:02:02.379 20:29:57 -- pm/common@21 -- # date +%s 00:02:02.379 20:29:57 -- pm/common@21 -- # date +%s 00:02:02.379 20:29:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721845797 00:02:02.379 20:29:57 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721845797 00:02:02.379 20:29:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721845797 00:02:02.379 20:29:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721845797 00:02:02.379 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721845797_collect-vmstat.pm.log 00:02:02.379 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721845797_collect-cpu-load.pm.log 00:02:02.379 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721845797_collect-cpu-temp.pm.log 00:02:02.379 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721845797_collect-bmc-pm.bmc.pm.log 00:02:03.312 20:29:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:03.312 20:29:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:03.312 20:29:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:03.312 20:29:58 -- common/autotest_common.sh@10 -- # set +x 00:02:03.312 20:29:58 -- spdk/autotest.sh@59 -- # create_test_list 00:02:03.312 20:29:58 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:03.312 20:29:58 -- common/autotest_common.sh@10 -- # set +x 00:02:03.312 20:29:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:03.312 20:29:58 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.312 20:29:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.312 20:29:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:03.312 20:29:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.312 20:29:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:03.312 20:29:58 -- common/autotest_common.sh@1455 -- # uname 00:02:03.312 20:29:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:03.312 20:29:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:03.312 20:29:58 -- common/autotest_common.sh@1475 -- # uname 00:02:03.312 20:29:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:03.312 20:29:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:03.312 20:29:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:03.312 20:29:58 -- spdk/autotest.sh@72 -- # hash lcov 00:02:03.312 20:29:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:03.312 20:29:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:03.312 --rc lcov_branch_coverage=1 00:02:03.312 --rc lcov_function_coverage=1 00:02:03.312 --rc genhtml_branch_coverage=1 00:02:03.312 --rc genhtml_function_coverage=1 00:02:03.312 --rc genhtml_legend=1 00:02:03.312 --rc geninfo_all_blocks=1 00:02:03.312 ' 00:02:03.312 20:29:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:03.312 --rc lcov_branch_coverage=1 00:02:03.312 --rc lcov_function_coverage=1 00:02:03.312 --rc genhtml_branch_coverage=1 00:02:03.312 --rc genhtml_function_coverage=1 00:02:03.312 --rc genhtml_legend=1 00:02:03.312 --rc geninfo_all_blocks=1 00:02:03.312 ' 00:02:03.312 20:29:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:03.312 --rc lcov_branch_coverage=1 00:02:03.312 --rc lcov_function_coverage=1 00:02:03.312 --rc genhtml_branch_coverage=1 00:02:03.312 --rc 
genhtml_function_coverage=1 00:02:03.312 --rc genhtml_legend=1 00:02:03.312 --rc geninfo_all_blocks=1 00:02:03.312 --no-external' 00:02:03.312 20:29:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:03.312 --rc lcov_branch_coverage=1 00:02:03.312 --rc lcov_function_coverage=1 00:02:03.312 --rc genhtml_branch_coverage=1 00:02:03.312 --rc genhtml_function_coverage=1 00:02:03.312 --rc genhtml_legend=1 00:02:03.312 --rc geninfo_all_blocks=1 00:02:03.312 --no-external' 00:02:03.312 20:29:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:03.570 lcov: LCOV version 1.14 00:02:03.570 20:29:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:04.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:04.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:04.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:04.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:04.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:04.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:05.202 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:05.202 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:05.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:05.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 
00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:05.203 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:05.203 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:05.203 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:05.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:05.203 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:05.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:05.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:05.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:05.204 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:05.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:05.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:05.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no 
functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:05.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:05.462 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:05.462 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:20.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:38.429 20:30:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:38.429 20:30:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:38.429 20:30:32 -- common/autotest_common.sh@10 -- # set +x 00:02:38.429 20:30:32 -- spdk/autotest.sh@91 -- # rm -f 00:02:38.429 20:30:32 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.429 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:38.429 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:38.429 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:38.429 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:38.429 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:38.429 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:38.429 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:38.429 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:38.429 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:38.429 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:38.429 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:38.429 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:38.429 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:38.429 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:38.429 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:38.429 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:02:38.429 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:38.429 20:30:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:38.429 20:30:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:38.429 20:30:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:38.429 20:30:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:38.429 20:30:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.429 20:30:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:38.429 20:30:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:38.429 20:30:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.429 20:30:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.429 20:30:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:38.429 20:30:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.429 20:30:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.429 20:30:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:38.429 20:30:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:38.429 20:30:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.429 No valid GPT data, bailing 00:02:38.429 20:30:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.429 20:30:33 -- scripts/common.sh@391 -- # pt= 00:02:38.429 20:30:33 -- scripts/common.sh@392 -- # return 1 00:02:38.429 20:30:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.429 1+0 records in 00:02:38.429 1+0 records out 00:02:38.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00185115 s, 566 MB/s 00:02:38.429 20:30:33 -- spdk/autotest.sh@118 -- # sync 00:02:38.429 20:30:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.429 20:30:33 -- 
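The "No valid GPT data, bailing" path above ends with `dd` zeroing the first MiB of the namespace before the tests reuse it. Isolated as a standalone helper — the function name is invented here, and unlike the trace it takes any file or block device path:

```shell
# Zero the first MiB of a file or block device, as in the
# "dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1" step traced above.
# conv=notrunc preserves the rest of the target; status=none silences
# the "records in/out" chatter seen in the log.
wipe_first_mib() {
  dd if=/dev/zero of="$1" bs=1M count=1 conv=notrunc status=none
}
```

Wiping only the first MiB is enough to destroy the protective MBR and primary GPT header, which is all the subsequent partitioning steps care about; the backup GPT at the end of the disk is left alone.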
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.429 20:30:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:40.326 20:30:35 -- spdk/autotest.sh@124 -- # uname -s 00:02:40.326 20:30:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:40.326 20:30:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:40.326 20:30:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:40.326 20:30:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:40.326 20:30:35 -- common/autotest_common.sh@10 -- # set +x 00:02:40.326 ************************************ 00:02:40.326 START TEST setup.sh 00:02:40.326 ************************************ 00:02:40.326 20:30:35 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:40.326 * Looking for test storage... 00:02:40.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:40.326 20:30:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:40.326 20:30:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:40.326 20:30:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:40.326 20:30:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:40.326 20:30:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:40.326 20:30:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:40.326 ************************************ 00:02:40.326 START TEST acl 00:02:40.326 ************************************ 00:02:40.326 20:30:35 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:40.584 * Looking for test storage... 
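The `run_test setup.sh ...` / `run_test acl ...` calls above nest, and the xtrace prefix grows a dot-separated suite name each time (`setup.sh` → `setup.sh.acl` → `setup.sh.acl.denied`). A simplified stand-in for that nesting — the real helper also times the test, prints the asterisk banners seen in the log, and traps failures, none of which is reproduced here:

```shell
# Minimal sketch of run_test's suite nesting: push the name onto a
# dot-separated prefix, print START/END markers, run the command, and
# restore the prefix. TEST_PREFIX is an assumed variable name.
run_test() {
  local name=$1; shift
  local prev=$TEST_PREFIX
  TEST_PREFIX=${TEST_PREFIX:+$TEST_PREFIX.}$name
  echo "START TEST $name"
  "$@"
  local rc=$?
  echo "END TEST $name"
  TEST_PREFIX=$prev
  return $rc
}
```

The prefix is what makes interleaved output from nested suites attributable: every traced command in the log carries the full `setup.sh.acl.denied`-style path of the suite that ran it.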
00:02:40.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:40.584 20:30:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:40.584 20:30:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:40.584 20:30:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.584 20:30:35 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.958 20:30:37 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:41.958 20:30:37 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:41.958 20:30:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.958 20:30:37 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:41.958 20:30:37 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.958 20:30:37 setup.sh.acl -- setup/common.sh@10 -- # 
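The `get_zoned_devs` trace above probes each `/sys/block/nvme*` namespace's `queue/zoned` attribute; `[[ none != none ]]` failing is what marks `nvme0n1` as conventional. The same check as a standalone sketch — `SYSFS_ROOT` is added here purely so the function can be pointed at a fake sysfs tree, and the missing-attribute case is treated as "not zoned", which matches how the trace only adds devices whose attribute reads something other than `none`:

```shell
# Succeeds (returns 0) when the block device is zoned, i.e. its sysfs
# queue/zoned attribute reads anything but "none".
is_block_zoned() {
  local device=$1
  local zoned=${SYSFS_ROOT:-/sys}/block/$device/queue/zoned
  [[ -e $zoned ]] || return 1        # no attribute: assume conventional
  [[ $(<"$zoned") != none ]]         # "none" = conventional, else zoned
}
```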
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:42.891 Hugepages 00:02:42.891 node hugesize free / total 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:42.891 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.892 00:02:42.892 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:42.892 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.150 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.151 
20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:43.151 20:30:38 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:43.151 20:30:38 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:43.151 20:30:38 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:43.151 20:30:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:43.151 ************************************ 00:02:43.151 START TEST denied 00:02:43.151 ************************************ 00:02:43.151 20:30:38 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:43.151 20:30:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:43.151 20:30:38 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:43.151 20:30:38 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:43.151 20:30:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.151 20:30:38 setup.sh.acl.denied -- setup/common.sh@10 -- # 
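The long `read -r _ dev _ _ _ driver _` loop above walks `setup.sh status` output one line at a time: fields it doesn't need are discarded into `_`, BDF lines are recognized by the `*:*:*.*` glob (which skips the Hugepages header), and only `nvme`-driven controllers are collected. The same loop over a fabricated two-device status sample — the sample text below is made up to mirror the BDFs in this log, not real `setup.sh status` output:

```shell
# Collect NVMe controllers from "Type BDF Vendor Device NUMA Driver ..."
# status lines, exactly as the acl.sh trace above does.
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue   # not a BDF line (e.g. hugepage rows)
  [[ $driver == nvme ]] || continue   # ignore ioatdma and friends
  devs+=("$dev")
  drivers["$dev"]=$driver
done <<'EOF'
node hugesize free / total
NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
EOF
```

Feeding the loop via a here-document (not a pipe) is deliberate: the `devs` and `drivers` arrays are filled in the current shell, so the `(( 1 > 0 ))` device-count check that follows in the trace can see them.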
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.524 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.524 20:30:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.054 00:02:47.054 real 0m3.735s 00:02:47.054 user 0m1.037s 00:02:47.054 sys 0m1.775s 00:02:47.054 20:30:42 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:47.054 20:30:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:47.054 ************************************ 00:02:47.055 END TEST denied 00:02:47.055 ************************************ 00:02:47.055 20:30:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:47.055 20:30:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:47.055 20:30:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:47.055 20:30:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.055 ************************************ 00:02:47.055 START TEST allowed 00:02:47.055 
************************************ 00:02:47.055 20:30:42 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:47.055 20:30:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:47.055 20:30:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:47.055 20:30:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.055 20:30:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:47.055 20:30:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.610 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:49.610 20:30:44 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:49.610 20:30:44 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:49.610 20:30:44 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:49.610 20:30:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.610 20:30:44 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.991 00:02:50.991 real 0m3.855s 00:02:50.991 user 0m1.026s 00:02:50.991 sys 0m1.649s 00:02:50.991 20:30:46 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:50.991 20:30:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:50.991 ************************************ 00:02:50.991 END TEST allowed 00:02:50.991 ************************************ 00:02:50.991 00:02:50.991 real 0m10.402s 00:02:50.991 user 0m3.204s 00:02:50.991 sys 0m5.164s 00:02:50.991 20:30:46 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:50.991 20:30:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.991 ************************************ 00:02:50.991 END TEST acl 00:02:50.991 ************************************ 00:02:50.991 20:30:46 setup.sh 
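The `denied` and `allowed` tests above exercise the same policy from both sides: `PCI_BLOCKED=' 0000:88:00.0'` makes `setup.sh config` print "Skipping denied controller", while `PCI_ALLOWED=0000:88:00.0` restricts setup to that one controller. A simplified stand-in for the predicate — the real setup scripts implement this with more cases (domains, wildcards), so treat this as a sketch of the observable behavior only:

```shell
# Sketch of PCI_BLOCKED / PCI_ALLOWED filtering: a BDF is usable when
# it is not blocked and, if an allow-list is set, appears on it.
# Lists are whitespace-separated BDFs, as in the traces above.
pci_can_use() {
  local bdf=$1
  [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
  [[ -z $PCI_ALLOWED ]] && return 0
  [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}
```

Padding both the list and the candidate with spaces before the glob match is a small trick that prevents `0000:88:00.0` from accidentally matching a longer entry that merely contains it.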
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:50.991 20:30:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:50.991 20:30:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:50.991 20:30:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.991 ************************************ 00:02:50.991 START TEST hugepages 00:02:50.991 ************************************ 00:02:50.991 20:30:46 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:50.991 * Looking for test storage... 00:02:50.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.991 
20:30:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43847420 kB' 'MemAvailable: 47325988 kB' 'Buffers: 2704 kB' 'Cached: 10195048 kB' 'SwapCached: 0 kB' 'Active: 7182372 kB' 'Inactive: 3493852 kB' 'Active(anon): 6793468 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481796 kB' 'Mapped: 174028 kB' 'Shmem: 6314996 kB' 'KReclaimable: 179304 kB' 'Slab: 549012 kB' 'SReclaimable: 179304 kB' 'SUnreclaim: 369708 kB' 'KernelStack: 13056 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 7949272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196472 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:50.991 20:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:50.991 20:30:46 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ [identical IFS/read/continue trace repeated for every remaining /proc/meminfo field; values captured in the printf record above] 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
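The long trace above is setup/common.sh's `get_meminfo` scanning /proc/meminfo record by record until the requested field (here Hugepagesize) matches, then echoing its value. Stripped of xtrace noise, the lookup reduces to the loop below — a stand-alone sketch of the technique, not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# get_meminfo-style lookup: scan /proc/meminfo for one field and print
# its numeric value (kB for sized fields). IFS=': ' splits each line
# into the field name, the value, and the trailing unit.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize   # typically 2048 on x86_64 with 2 MB hugepages
```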
local node hp 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:50.992 20:30:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:50.992 20:30:46 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:50.992 20:30:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:50.992 20:30:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.992 ************************************ 00:02:50.992 START TEST default_setup 00:02:50.992 ************************************ 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
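`clear_hp`, traced above, walks every NUMA node's hugepage pools and writes 0 to each `nr_hugepages` before `CLEAR_HUGE=yes` is exported, so the test starts from a clean slate. A hedged stand-alone sketch of that walk (the `sysroot` parameter is an illustrative addition so the loop can be tried against a scratch tree; writing the real sysfs files requires root):

```shell
#!/usr/bin/env bash
# clear_hp-style reset: zero every hugepage pool under every NUMA node.
clear_hp() {
    local sysroot=${1:-/sys/devices/system/node} node hp
    shopt -s nullglob
    for node in "$sysroot"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # On a real system this releases the node's reserved hugepages.
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```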
get_test_nr_hugepages 2097152 0 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.992 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
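Inside default_setup, `get_test_nr_hugepages 2097152 0` turns a 2097152 kB (2 GiB) request on node 0 into `nr_hugepages=1024` by dividing by the 2048 kB default hugepage size. The arithmetic, sketched (units assumed to be kB, matching /proc/meminfo):

```shell
#!/usr/bin/env bash
# get_test_nr_hugepages-style arithmetic: requested memory in kB divided
# by the hugepage size in kB gives the number of pages to reserve.
pages_for() {
    local size_kb=$1 hugepagesize_kb=${2:-2048}
    echo $(( size_kb / hugepagesize_kb ))
}

pages_for 2097152 2048   # 2 GiB of 2 MB pages -> 1024
```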
output 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.993 20:30:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.926 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:52.184 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:52.184 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:53.120 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:53.120 20:30:48 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45942916 kB' 'MemAvailable: 49421484 kB' 'Buffers: 2704 kB' 'Cached: 10195128 kB' 'SwapCached: 0 kB' 'Active: 7197212 kB' 'Inactive: 3493852 kB' 'Active(anon): 6808308 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496420 kB' 'Mapped: 173700 kB' 'Shmem: 6315076 kB' 'KReclaimable: 179304 kB' 'Slab: 548244 kB' 'SReclaimable: 
179304 kB' 'SUnreclaim: 368940 kB' 'KernelStack: 12672 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7964548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [identical IFS/read/continue trace repeated for each remaining /proc/meminfo field; values captured in the printf record above] 00:02:53.120 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.120 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 
20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939096 kB' 'MemAvailable: 49417664 kB' 'Buffers: 2704 kB' 'Cached: 10195136 kB' 'SwapCached: 0 kB' 'Active: 7199348 kB' 'Inactive: 3493852 kB' 'Active(anon): 6810444 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498584 kB' 'Mapped: 173984 kB' 'Shmem: 6315084 kB' 'KReclaimable: 179304 kB' 'Slab: 548240 kB' 'SReclaimable: 179304 kB' 'SUnreclaim: 368936 kB' 'KernelStack: 12736 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7966560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196504 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.121 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.121
[scan repeats identically for each remaining /proc/meminfo key — MemFree, MemAvailable, Buffers, ..., HugePages_Rsvd — none of which match \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939132 kB' 'MemAvailable: 49417700 kB' 'Buffers: 2704 kB' 'Cached: 10195136 kB' 'SwapCached: 0 kB' 'Active: 7194872 kB' 'Inactive: 3493852 kB' 'Active(anon): 6805968 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494144 kB' 'Mapped: 174044 kB' 'Shmem: 6315084 kB' 'KReclaimable: 179304 kB' 'Slab: 548340 kB' 'SReclaimable: 179304 kB' 'SUnreclaim: 369036 kB' 'KernelStack: 12784 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7962608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 
'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.123 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.124 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.125 20:30:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.125 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.125 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.125 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.125 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.384 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.385 nr_hugepages=1024 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.385 resv_hugepages=0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.385 
surplus_hugepages=0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.385 anon_hugepages=0 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.385 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:53.386 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45931852 kB' 'MemAvailable: 49410420 kB' 'Buffers: 2704 kB' 'Cached: 10195176 kB' 'SwapCached: 0 kB' 'Active: 7198984 kB' 'Inactive: 3493852 kB' 'Active(anon): 6810080 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 
'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498188 kB' 'Mapped: 173628 kB' 'Shmem: 6315124 kB' 'KReclaimable: 179304 kB' 'Slab: 548340 kB' 'SReclaimable: 179304 kB' 'SUnreclaim: 369036 kB' 'KernelStack: 12736 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7966604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB'
[xtrace trimmed: setup/common.sh@31-32 repeats "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue" once per /proc/meminfo key listed above, until HugePages_Total matches]
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:53.387 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:53.388 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB'
'MemFree: 20651064 kB' 'MemUsed: 12225876 kB' 'SwapCached: 0 kB' 'Active: 5843268 kB' 'Inactive: 3248752 kB' 'Active(anon): 5632532 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742316 kB' 'Mapped: 138496 kB' 'AnonPages: 352940 kB' 'Shmem: 5282828 kB' 'KernelStack: 8376 kB' 'PageTables: 5372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115968 kB' 'Slab: 348632 kB' 'SReclaimable: 115968 kB' 'SUnreclaim: 232664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace trimmed: setup/common.sh@31-32 repeats "IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue" once per node0 meminfo key listed above, until HugePages_Surp matches]
00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:53.389 20:30:48
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:53.389 node0=1024 expecting 1024 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:53.389 00:02:53.389 real 0m2.334s 00:02:53.389 user 0m0.638s 00:02:53.389 sys 0m0.825s 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:53.389 20:30:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:53.389 ************************************ 00:02:53.389 END TEST default_setup 00:02:53.389 ************************************ 00:02:53.389 20:30:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:53.389 20:30:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:53.389 20:30:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:53.389 20:30:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.389 ************************************ 00:02:53.389 START TEST per_node_1G_alloc 00:02:53.389 ************************************ 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:53.389 
20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:53.389 20:30:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:53.389 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.390 20:30:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.764 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:54.764 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.764 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.764 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.764 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.764 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.764 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.764 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.764 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.764 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:54.764 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.764 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.764 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.764 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.764 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.764 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.764 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.764 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:54.764 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:54.764 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:54.764 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.764 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.765 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951688 kB' 'MemAvailable: 49430260 kB' 'Buffers: 2704 kB' 'Cached: 10195244 kB' 'SwapCached: 0 kB' 'Active: 7195012 kB' 'Inactive: 3493852 kB' 'Active(anon): 6806108 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494208 kB' 'Mapped: 173336 kB' 'Shmem: 6315192 kB' 'KReclaimable: 179312 kB' 'Slab: 548524 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369212 kB' 'KernelStack: 12768 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 
20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.765 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.765 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951960 kB' 'MemAvailable: 49430532 kB' 'Buffers: 2704 kB' 'Cached: 10195248 kB' 'SwapCached: 0 kB' 'Active: 7194420 kB' 'Inactive: 3493852 kB' 'Active(anon): 6805516 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494020 kB' 'Mapped: 173276 kB' 'Shmem: 6315196 kB' 'KReclaimable: 179312 kB' 'Slab: 548508 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369196 kB' 'KernelStack: 12720 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 
00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.766 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951144 kB' 'MemAvailable: 49429716 kB' 'Buffers: 2704 kB' 'Cached: 10195272 kB' 'SwapCached: 0 kB' 'Active: 7195096 kB' 'Inactive: 3493852 kB' 
'Active(anon): 6806192 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494636 kB' 'Mapped: 173200 kB' 'Shmem: 6315220 kB' 'KReclaimable: 179312 kB' 'Slab: 548516 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369204 kB' 'KernelStack: 12832 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 
20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.767 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:54.768 nr_hugepages=1024 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.768 resv_hugepages=0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.768 surplus_hugepages=0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.768 anon_hugepages=0 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45950892 kB' 'MemAvailable: 49429464 kB' 'Buffers: 2704 kB' 'Cached: 10195292 kB' 'SwapCached: 0 kB' 'Active: 7194928 kB' 'Inactive: 3493852 kB' 'Active(anon): 6806024 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494444 kB' 'Mapped: 173200 kB' 'Shmem: 6315240 kB' 'KReclaimable: 179312 kB' 'Slab: 548516 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369204 kB' 'KernelStack: 12848 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 
20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.768 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.769 20:30:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.769 20:30:50
[... setup/common.sh@31-32 field-scan trace elided: IFS=': '; read -r var val _; continue for each /proc/meminfo key not matching HugePages_Total (ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) ...]
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.769 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21715888 kB' 'MemUsed: 11161052 kB' 'SwapCached: 0 kB' 'Active: 5838740 kB' 'Inactive: 3248752 kB' 'Active(anon): 5628004 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742440 kB' 'Mapped: 137740 kB' 'AnonPages: 348476 kB' 'Shmem: 5282952 kB' 'KernelStack: 8392 kB' 'PageTables: 5352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348632 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 field-scan trace elided: each node0 meminfo key checked against HugePages_Surp; continue on non-match ...]
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24235272 kB' 'MemUsed: 3429516 kB' 'SwapCached: 0 kB' 'Active: 1356404 kB' 'Inactive: 245100 kB' 'Active(anon): 1178236 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1455584 kB' 'Mapped: 35460 kB' 'AnonPages: 146180 kB' 'Shmem: 1032316 kB' 'KernelStack: 4456 kB' 'PageTables: 2652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63336 kB' 'Slab: 199884 kB' 'SReclaimable: 63336 kB' 'SUnreclaim: 136548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 field-scan trace elided: each node1 meminfo key checked against HugePages_Surp; continue on non-match ...]
00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.770 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:54.771 node0=512 expecting 512 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:54.771 node1=512 expecting 512 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:54.771 00:02:54.771 real 0m1.456s 00:02:54.771 user 0m0.601s 00:02:54.771 sys 0m0.819s 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:54.771 20:30:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.771 ************************************ 00:02:54.771 END TEST per_node_1G_alloc 00:02:54.771 ************************************ 00:02:54.771 20:30:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:54.771 20:30:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:54.771 20:30:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:54.771 20:30:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:54.771 ************************************ 00:02:54.771 START TEST even_2G_alloc 00:02:54.771 ************************************ 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.771 20:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.147 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.147 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.147 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.147 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.147 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.147 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.147 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.147 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.147 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.147 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.147 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.147 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.147 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.147 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.147 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.147 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:02:56.147 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.147 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45942756 kB' 'MemAvailable: 49421328 kB' 'Buffers: 2704 kB' 'Cached: 10195388 kB' 'SwapCached: 0 kB' 'Active: 7195800 kB' 'Inactive: 3493852 kB' 'Active(anon): 6806896 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494884 kB' 'Mapped: 173332 kB' 'Shmem: 6315336 kB' 'KReclaimable: 179312 kB' 'Slab: 548436 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369124 kB' 'KernelStack: 12864 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB'
00:02:56.147 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [... repeated per-field scan: every /proc/meminfo key from MemTotal through HardwareCorrupted compared against AnonHugePages and skipped via continue ...]
00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45942912 kB' 'MemAvailable: 49421484 kB' 'Buffers: 2704 kB' 'Cached: 10195392 kB' 'SwapCached: 0 kB' 'Active: 7196124 kB' 'Inactive: 3493852 kB' 'Active(anon): 6807220 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495228 kB' 'Mapped: 173296 kB' 'Shmem: 6315340 kB' 'KReclaimable: 179312 kB' 'Slab: 548436 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369124 kB' 'KernelStack: 12912 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB'
00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [... repeated per-field scan against HugePages_Surp: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached and Active skipped via continue ...]
00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.150 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45946312 kB' 'MemAvailable: 49424884 kB' 'Buffers: 2704 
kB' 'Cached: 10195392 kB' 'SwapCached: 0 kB' 'Active: 7196216 kB' 'Inactive: 3493852 kB' 'Active(anon): 6807312 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495304 kB' 'Mapped: 173296 kB' 'Shmem: 6315340 kB' 'KReclaimable: 179312 kB' 'Slab: 548428 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369116 kB' 'KernelStack: 12880 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.151 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.152 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.152 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / continue xtrace repeated for each remaining /proc/meminfo key (Mlocked through HugePages_Free) until HugePages_Rsvd matches ...]
00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.153 nr_hugepages=1024 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.153 resv_hugepages=0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.153 surplus_hugepages=0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.153 anon_hugepages=0 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.153 20:30:51 
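The loop traced above is the `get_meminfo` helper in `setup/common.sh`: each `/proc/meminfo` line is split on `IFS=': '`, keys that don't match the requested one hit `continue`, and the matching key's value is echoed back (here `HugePages_Rsvd` yields `0`). A minimal re-implementation sketch of that pattern, assuming only the behavior visible in the trace (the `file` argument is an illustrative addition for demo purposes; the real helper reads `/proc/meminfo` or a per-node sysfs `meminfo`):

```shell
# Sketch of the get_meminfo scan pattern seen in the xtrace above:
# split each line on ': ', skip non-matching keys, print the match.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirrors the [[ $var == HugePages_Rsvd ]] / continue pattern in the log
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Demo against a small fake meminfo file (values taken from the dump below
# is not assumed; these are illustrative lines only).
demo=$(mktemp)
printf 'MemTotal: 60541728 kB\nHugePages_Total: 1024\nHugePages_Rsvd: 0\n' > "$demo"
get_meminfo HugePages_Rsvd "$demo"   # prints 0
rm -f "$demo"
```

Note how `IFS=': '` makes `read` treat both the colon and the space as field separators, so `val` lands on the number and the trailing `kB` unit falls into the throwaway `_` variable.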
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.153 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45945892 kB' 'MemAvailable: 49424464 kB' 'Buffers: 2704 kB' 'Cached: 10195432 kB' 'SwapCached: 0 kB' 'Active: 7195760 kB' 'Inactive: 3493852 kB' 'Active(anon): 6806856 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494716 kB' 'Mapped: 173220 kB' 'Shmem: 6315380 kB' 'KReclaimable: 179312 kB' 'Slab: 548404 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 369092 kB' 'KernelStack: 12848 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7960872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.154 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.154 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / continue xtrace repeated for each remaining /proc/meminfo key (MemAvailable through Unaccepted) until HugePages_Total matches ...]
00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.155 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.155 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21709184 kB' 'MemUsed: 11167756 kB' 'SwapCached: 0 kB' 'Active: 5839020 kB' 'Inactive: 3248752 kB' 'Active(anon): 5628284 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742564 kB' 'Mapped: 137760 kB' 'AnonPages: 348368 kB' 'Shmem: 5283076 kB' 'KernelStack: 8408 kB' 'PageTables: 5352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348556 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.156 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 
20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24236456 kB' 'MemUsed: 3428332 kB' 'SwapCached: 0 kB' 'Active: 1356864 kB' 'Inactive: 245100 kB' 'Active(anon): 1178696 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1455592 kB' 'Mapped: 35460 kB' 'AnonPages: 146412 kB' 'Shmem: 1032324 kB' 'KernelStack: 4472 kB' 'PageTables: 2700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63336 kB' 'Slab: 199848 kB' 'SReclaimable: 63336 kB' 'SUnreclaim: 136512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.157 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.158 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.417 node0=512 expecting 512 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.417 node1=512 expecting 512 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.417 00:02:56.417 real 0m1.429s 00:02:56.417 user 0m0.593s 00:02:56.417 sys 0m0.796s 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:02:56.417 20:30:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.417 ************************************ 00:02:56.417 END TEST even_2G_alloc 00:02:56.417 ************************************ 00:02:56.417 20:30:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:56.417 20:30:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:56.418 20:30:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:56.418 20:30:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.418 ************************************ 00:02:56.418 START TEST odd_alloc 00:02:56.418 ************************************ 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.418 20:30:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:57.353 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:57.353 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:57.353 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:57.353 0000:00:04.5 (8086 0e25): Already using the vfio-pci 
driver 00:02:57.353 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:57.353 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:57.353 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:57.353 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:57.353 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:57.353 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:57.353 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:57.353 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:57.353 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:57.353 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:57.353 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:57.353 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:57.353 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:57.617 20:30:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.617 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45917488 kB' 'MemAvailable: 49396060 kB' 'Buffers: 2704 kB' 'Cached: 10195516 kB' 'SwapCached: 0 kB' 'Active: 7192944 kB' 'Inactive: 3493852 kB' 'Active(anon): 6804040 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491768 kB' 'Mapped: 172564 kB' 'Shmem: 6315464 kB' 'KReclaimable: 179312 kB' 'Slab: 548308 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368996 kB' 'KernelStack: 12784 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7947564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.618 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45919416 kB' 'MemAvailable: 49397988 kB' 'Buffers: 
2704 kB' 'Cached: 10195516 kB' 'SwapCached: 0 kB' 'Active: 7194320 kB' 'Inactive: 3493852 kB' 'Active(anon): 6805416 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493244 kB' 'Mapped: 172584 kB' 'Shmem: 6315464 kB' 'KReclaimable: 179312 kB' 'Slab: 548280 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368968 kB' 'KernelStack: 12848 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7949944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.619 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.620 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:57.621 20:30:53 
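The trace above is `setup/common.sh`'s `get_meminfo` scanning every `/proc/meminfo` key until it hits the requested one (`HugePages_Surp`), echoing its value, and returning. A minimal standalone sketch of that parsing pattern is below; the function name and flow mirror the trace, but the `MEMINFO_FILE` override is an illustrative addition (the real script also strips the `Node N ` prefix from per-node meminfo lines, as the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace shows):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced in the log: read meminfo
# line by line with IFS=': ' so "Key:   value kB" splits into the
# key, the numeric value, and a discarded unit, then echo the value
# for the requested key. Simplified reimplementation, not SPDK source.
get_meminfo() {
    local get=$1 node=${2:-}
    # MEMINFO_FILE is an illustrative test hook, not in the original.
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # Per-NUMA-node queries read the node-specific meminfo instead,
    # matching the [[ -e /sys/devices/system/node/... ]] check above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, like the long run of
        # "continue" lines in the trace; a match echoes and returns.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
```

With this, `get_meminfo HugePages_Surp` prints `0` on the host in the log, and `anon=$(get_meminfo AnonHugePages)` reproduces the `anon=0` assignment seen earlier in the trace.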
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45919728 kB' 'MemAvailable: 49398300 kB' 'Buffers: 2704 kB' 'Cached: 10195524 kB' 'SwapCached: 0 kB' 'Active: 7193604 kB' 'Inactive: 3493852 kB' 'Active(anon): 6804700 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492524 kB' 'Mapped: 172500 kB' 'Shmem: 6315472 kB' 'KReclaimable: 179312 kB' 'Slab: 548280 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368968 kB' 'KernelStack: 13088 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 
'Committed_AS: 7949964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.621 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.622 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.622 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:57.623 nr_hugepages=1025 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.623 resv_hugepages=0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.623 surplus_hugepages=0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.623 anon_hugepages=0 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45919236 kB' 'MemAvailable: 49397808 kB' 'Buffers: 2704 kB' 'Cached: 10195560 kB' 'SwapCached: 0 kB' 'Active: 7193496 kB' 'Inactive: 3493852 kB' 'Active(anon): 6804592 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492712 kB' 'Mapped: 172428 kB' 'Shmem: 6315508 kB' 'KReclaimable: 179312 kB' 'Slab: 548260 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368948 kB' 'KernelStack: 13168 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7948620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196804 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:57.623 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local
node 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.625 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21690364 kB' 'MemUsed: 11186576 kB' 'SwapCached: 0 kB' 'Active: 5837124 kB' 'Inactive: 3248752 kB' 'Active(anon): 5626388 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742688 kB' 'Mapped: 137000 kB' 'AnonPages: 346372 kB' 'Shmem: 5283200 kB' 'KernelStack: 8392 kB' 'PageTables: 5232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348440 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.625 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 
20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.626 
20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:57.626 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24226944 kB' 'MemUsed: 3437844 kB' 'SwapCached: 0 kB' 'Active: 1357496 kB' 'Inactive: 245100 kB' 'Active(anon): 1179328 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1455596 kB' 'Mapped: 35420 kB' 'AnonPages: 147004 kB' 'Shmem: 1032328 kB' 'KernelStack: 4792 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63336 kB' 'Slab: 199820 kB' 'SReclaimable: 63336 kB' 
'SUnreclaim: 136484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 
20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.627 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:57.628 node0=512 expecting 513 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:57.628 node1=513 expecting 512 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:57.628 00:02:57.628 real 0m1.376s 00:02:57.628 user 0m0.562s 00:02:57.628 sys 0m0.775s 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:57.628 20:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:57.628 ************************************ 00:02:57.628 END TEST odd_alloc 00:02:57.628 ************************************ 00:02:57.628 20:30:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:57.628 20:30:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:57.628 20:30:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:57.628 20:30:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:57.628 ************************************ 00:02:57.628 START TEST custom_alloc 00:02:57.628 ************************************ 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.628 20:30:53 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:57.628 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:57.887 20:30:53 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:57.887 20:30:53 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.887 20:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.820 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.820 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:58.820 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:58.820 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:58.820 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:58.820 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:58.820 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:58.820 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:58.820 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:58.820 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:58.820 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:58.820 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:58.820 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:58.820 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:58.820 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:58.820 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:58.820 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.084 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44863932 kB' 'MemAvailable: 48342504 kB' 'Buffers: 2704 kB' 'Cached: 10195652 kB' 'SwapCached: 0 kB' 'Active: 7193664 kB' 'Inactive: 3493852 kB' 'Active(anon): 6804760 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492796 kB' 'Mapped: 172220 kB' 'Shmem: 6315600 kB' 'KReclaimable: 179312 kB' 'Slab: 548200 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368888 kB' 'KernelStack: 12720 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7916496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44865188 kB' 'MemAvailable: 48343760 kB' 'Buffers: 2704 kB' 'Cached: 10195656 kB' 'SwapCached: 0 kB' 'Active: 7197380 kB' 'Inactive: 3493852 kB' 'Active(anon): 6808476 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 496124 kB' 'Mapped: 171952 kB' 'Shmem: 6315604 kB' 'KReclaimable: 179312 kB' 'Slab: 548176 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368864 kB' 'KernelStack: 12752 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7919564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196488 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB'
[repetitive xtrace elided: setup/common.sh@31-32 scanned every /proc/meminfo field (MemTotal ... Hugetlb, DirectMap*) against HugePages_Surp, emitting IFS=': ' / read -r var val _ / continue for each non-matching field, until HugePages_Surp matched]
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.088 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44865712 kB' 'MemAvailable: 48344284 kB' 'Buffers: 2704 kB' 'Cached: 10195672 kB' 'SwapCached: 0 kB' 'Active: 7192448 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803544 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490648 kB' 'Mapped: 171440 kB' 'Shmem: 6315620 kB' 'KReclaimable: 179312 kB' 'Slab: 548168 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368856 kB' 'KernelStack: 12784 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7918648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB'
[repetitive xtrace elided: the same per-field scan repeats, this time matching HugePages_Rsvd]
setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:59.090 nr_hugepages=1536 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.090 resv_hugepages=0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.090 surplus_hugepages=0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.090 anon_hugepages=0 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44873524 kB' 'MemAvailable: 48352096 kB' 
'Buffers: 2704 kB' 'Cached: 10195692 kB' 'SwapCached: 0 kB' 'Active: 7191684 kB' 'Inactive: 3493852 kB' 'Active(anon): 6802780 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490392 kB' 'Mapped: 171440 kB' 'Shmem: 6315640 kB' 'KReclaimable: 179312 kB' 'Slab: 548168 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368856 kB' 'KernelStack: 12768 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7913484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.090 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... the same IFS=': ' / read / [[ field == HugePages_Total ]] / continue xtrace quartet repeats for every field from MemFree through Unaccepted ...] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21688624 kB' 'MemUsed: 
11188316 kB' 'SwapCached: 0 kB' 'Active: 5836228 kB' 'Inactive: 3248752 kB' 'Active(anon): 5625492 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742792 kB' 'Mapped: 136132 kB' 'AnonPages: 345412 kB' 'Shmem: 5283304 kB' 'KernelStack: 8360 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348440 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.093 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.094 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.094 20:30:54 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23184900 kB' 'MemUsed: 4479888 kB' 'SwapCached: 0 kB' 'Active: 1355580 kB' 'Inactive: 245100 kB' 'Active(anon): 1177412 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245100 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1455608 kB' 'Mapped: 35308 kB' 'AnonPages: 145100 kB' 'Shmem: 1032340 kB' 'KernelStack: 4392 kB' 'PageTables: 2352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63336 kB' 'Slab: 199728 kB' 'SReclaimable: 63336 kB' 'SUnreclaim: 136392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.094
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.095
20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.095 20:30:54
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:59.095 node0=512 expecting 512 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:59.095 node1=1024 expecting 1024 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:59.095 00:02:59.095 real 0m1.444s 00:02:59.095 user 0m0.643s 00:02:59.095 sys 0m0.764s 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:59.095 20:30:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.095 ************************************ 00:02:59.095 END TEST custom_alloc 00:02:59.095 ************************************ 00:02:59.095 20:30:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:59.095 20:30:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.095 20:30:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.095 20:30:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.354 ************************************ 00:02:59.354 START TEST no_shrink_alloc 00:02:59.354 ************************************ 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:59.354 20:30:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.354 20:30:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.288 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.288 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.288 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.288 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.288 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.288 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.288 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.288 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.288 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.288 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.288 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.288 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.288 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.288 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.288 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.288 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.288 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.551 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45925112 kB' 'MemAvailable: 49403684 kB' 'Buffers: 2704 kB' 'Cached: 10195772 kB' 'SwapCached: 0 kB' 'Active: 7192420 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803516 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490996 kB' 'Mapped: 171548 kB' 'Shmem: 6315720 kB' 'KReclaimable: 179312 kB' 'Slab: 548208 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368896 kB' 'KernelStack: 12768 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.552 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.553 
20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45932964 kB' 'MemAvailable: 49411536 kB' 'Buffers: 2704 kB' 'Cached: 10195772 kB' 'SwapCached: 0 kB' 'Active: 7192244 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803340 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490840 kB' 'Mapped: 
171532 kB' 'Shmem: 6315720 kB' 'KReclaimable: 179312 kB' 'Slab: 548208 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368896 kB' 'KernelStack: 12784 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.553 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 
20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.554 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 
20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45933228 kB' 'MemAvailable: 49411800 kB' 'Buffers: 2704 kB' 'Cached: 10195772 kB' 'SwapCached: 0 kB' 'Active: 7192124 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803220 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490688 kB' 'Mapped: 171452 kB' 'Shmem: 6315720 kB' 'KReclaimable: 179312 kB' 'Slab: 548192 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368880 kB' 'KernelStack: 12784 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.555 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.556 20:30:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.556-00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field scan: Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free skipped; none match HugePages_Rsvd) 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:00.557 nr_hugepages=1024 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.557 resv_hugepages=0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.557 surplus_hugepages=0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.557 anon_hugepages=0 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.557 20:30:55
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.557 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.558 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.558 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45933768 kB' 'MemAvailable: 49412340 kB' 'Buffers: 2704 kB' 'Cached: 10195816 kB' 'SwapCached: 0 kB' 'Active: 7192224 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803320 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490728 kB' 'Mapped: 171452 kB' 'Shmem: 6315764 kB' 'KReclaimable: 179312 kB' 'Slab: 548192 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368880 kB' 'KernelStack: 12800 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:00.558 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field scan: MemTotal through Unaccepted skipped; none match HugePages_Total) 00:03:00.558-00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.559 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20653564 kB' 'MemUsed: 12223376 kB' 'SwapCached: 0 kB' 'Active: 5836104 kB' 'Inactive: 3248752 kB' 'Active(anon): 5625368 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742912 kB' 'Mapped: 136144 kB' 'AnonPages: 345076 kB' 'Shmem: 5283424 kB' 'KernelStack: 8392 kB' 'PageTables: 5212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348508 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 
20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 
20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.560 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.561 20:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:01.942 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:01.942 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:01.942 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:01.942 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:01.942 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:01.942 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:01.942 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:01.942 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:01.942 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:01.942 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:01.942 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:01.942 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:01.942 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:01.942 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:01.942 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:01.942 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:01.942 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:01.942 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45937612 kB' 'MemAvailable: 49416184 kB' 'Buffers: 2704 kB' 'Cached: 10195888 kB' 'SwapCached: 0 kB' 'Active: 7192428 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803524 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490808 kB' 'Mapped: 171484 kB' 'Shmem: 6315836 kB' 'KReclaimable: 179312 kB' 'Slab: 547984 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368672 kB' 'KernelStack: 12800 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.942 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.943 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.943 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939996 kB' 'MemAvailable: 49418568 kB' 'Buffers: 2704 kB' 'Cached: 10195892 kB' 'SwapCached: 0 kB' 'Active: 7192452 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803548 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490840 kB' 'Mapped: 171460 kB' 'Shmem: 6315840 kB' 'KReclaimable: 179312 kB' 'Slab: 547984 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368672 kB' 'KernelStack: 12816 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.944 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
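The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one `var: val` pair at a time until the requested key (here HugePages_Surp) matches, then echoing its value. The following is a hypothetical minimal reconstruction inferred from the trace, not the actual SPDK source; it simplifies the script's mapfile-based loop into a plain while-read, and the per-NUMA-node path (`local node=`) is omitted.

```shell
# Hypothetical sketch of the get_meminfo logic seen in the trace:
# split each meminfo line on ': ', compare the key, print the numeric
# value on a match (the trailing "kB" falls into the discarded field).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
```

Called as `get_meminfo HugePages_Surp`, this prints 0 on the machine in the trace, which is what the script captures into `surp` below.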
00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.945 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45940224 kB' 'MemAvailable: 49418796 kB' 'Buffers: 2704 kB' 'Cached: 10195896 kB' 'SwapCached: 0 kB' 'Active: 7192164 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803260 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490540 kB' 'Mapped: 171460 kB' 'Shmem: 6315844 kB' 'KReclaimable: 179312 kB' 'Slab: 548064 kB' 
'SReclaimable: 179312 kB' 'SUnreclaim: 368752 kB' 'KernelStack: 12816 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.946 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.947 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.947 nr_hugepages=1024 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.947 resv_hugepages=0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.947 surplus_hugepages=0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.947 anon_hugepages=0 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
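The trace above shows `get_meminfo` resolving each hugepage counter (`HugePages_Surp`, `HugePages_Rsvd`, `HugePages_Total`) by scanning every `/proc/meminfo` line with `IFS=': ' read -r var val _` and `continue`-ing until the field name matches, then echoing the value. A minimal sketch of that lookup pattern, assuming plain `/proc/meminfo` (the function name is illustrative; the real `setup/common.sh` additionally handles per-NUMA-node meminfo files, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup traced in this log: split each line on ": ",
# compare the field name, and print the value of the first match.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # For "HugePages_Total: 1024" this yields var=HugePages_Total, val=1024;
        # for "MemTotal: 60541728 kB" the trailing unit lands in $_.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

total=$(get_meminfo_field HugePages_Total)
surp=$(get_meminfo_field HugePages_Surp)
echo "nr_hugepages=$total surplus_hugepages=$surp"
```

On the node in this log, the lookup yields `nr_hugepages=1024` and `surplus_hugepages=0`, matching the `echo` lines captured above. The per-field `continue` iterations in the trace are simply this loop skipping non-matching lines.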
-- # mapfile -t mem 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.947 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45941648 kB' 'MemAvailable: 49420220 kB' 'Buffers: 2704 kB' 'Cached: 10195932 kB' 'SwapCached: 0 kB' 'Active: 7192464 kB' 'Inactive: 3493852 kB' 'Active(anon): 6803560 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490816 kB' 'Mapped: 171460 kB' 'Shmem: 6315880 kB' 'KReclaimable: 179312 kB' 'Slab: 548064 kB' 'SReclaimable: 179312 kB' 'SUnreclaim: 368752 kB' 'KernelStack: 12816 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7913852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2010716 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 50331648 kB' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.948 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20648672 kB' 'MemUsed: 12228268 kB' 'SwapCached: 0 kB' 'Active: 5837016 kB' 'Inactive: 3248752 kB' 'Active(anon): 5626280 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248752 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8743024 kB' 'Mapped: 136152 kB' 'AnonPages: 345872 kB' 'Shmem: 5283536 kB' 'KernelStack: 8408 kB' 'PageTables: 5256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115976 kB' 'Slab: 348500 kB' 'SReclaimable: 115976 kB' 'SUnreclaim: 232524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.949 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.950 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:03:01.951 node0=1024 expecting 1024 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.951 00:03:01.951 real 0m2.769s 00:03:01.951 user 0m1.157s 00:03:01.951 sys 0m1.534s 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:01.951 20:30:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:01.951 ************************************ 00:03:01.951 END TEST no_shrink_alloc 00:03:01.951 ************************************ 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.951 20:30:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:01.951 20:30:57 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:01.951 00:03:01.951 real 0m11.168s 00:03:01.951 user 0m4.347s 00:03:01.951 sys 0m5.738s 00:03:01.951 20:30:57 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:01.951 20:30:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.951 ************************************ 00:03:01.951 END TEST hugepages 00:03:01.951 ************************************ 00:03:01.951 20:30:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:01.951 20:30:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:01.951 20:30:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:01.951 20:30:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.208 ************************************ 00:03:02.208 START TEST driver 00:03:02.208 ************************************ 00:03:02.208 20:30:57 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:02.208 * Looking for test storage... 
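The `clear_hp` trace above loops over every NUMA node and writes 0 to each hugepage-size counter before the hugepages suite exits. A minimal sketch of that loop, with the sysfs base path parameterized so it can be exercised against a mock directory tree instead of the real `/sys` (that parameter is an assumption for testing, not part of the original script):

```shell
#!/bin/sh
# Sketch of the clear_hp loop from setup/hugepages.sh: for every NUMA node,
# write 0 to each hugepages-<size> counter so no pages stay reserved.
clear_hp() {
    base=${1:-/sys/devices/system/node}
    for node_dir in "$base"/node*; do
        [ -d "$node_dir" ] || continue
        for hp in "$node_dir"/hugepages/hugepages-*; do
            [ -d "$hp" ] || continue
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

# Exercise against a mock tree (safe to run without root).
mock=$(mktemp -d)
mkdir -p "$mock/node0/hugepages/hugepages-2048kB" \
         "$mock/node0/hugepages/hugepages-1048576kB"
echo 512 > "$mock/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 4   > "$mock/node0/hugepages/hugepages-1048576kB/nr_hugepages"
clear_hp "$mock"
cat "$mock/node0/hugepages/hugepages-2048kB/nr_hugepages"   # -> 0
```

On real hardware the writes go to `/sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages` and require root, matching the `echo 0` lines in the trace.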
00:03:02.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.208 20:30:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:02.208 20:30:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.208 20:30:57 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.773 20:31:00 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:04.773 20:31:00 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:04.773 20:31:00 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:04.773 20:31:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:04.773 ************************************ 00:03:04.773 START TEST guess_driver 00:03:04.773 ************************************ 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:04.773 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:04.773 Looking for driver=vfio-pci 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.773 20:31:00 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.706 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.964 20:31:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.900 20:31:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.430 00:03:09.430 real 0m4.697s 00:03:09.430 user 0m1.119s 00:03:09.430 sys 0m1.723s 00:03:09.430 20:31:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:09.431 20:31:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:09.431 ************************************ 00:03:09.431 END TEST guess_driver 00:03:09.431 ************************************ 00:03:09.431 00:03:09.431 real 0m7.321s 00:03:09.431 user 0m1.702s 00:03:09.431 sys 0m2.772s 00:03:09.431 20:31:04 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:09.431 20:31:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:09.431 ************************************ 00:03:09.431 END TEST driver 00:03:09.431 ************************************ 00:03:09.431 20:31:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:09.431 20:31:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:09.431 20:31:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:09.431 20:31:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.431 ************************************ 00:03:09.431 START TEST devices 00:03:09.431 ************************************ 00:03:09.431 20:31:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:09.431 * Looking for test storage... 
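The `guess_driver` trace above picks vfio-pci because 141 IOMMU groups exist and `modprobe --show-depends vfio_pci` resolves to `.ko` modules. A sketch of that decision, with the kernel-probed inputs passed as parameters so the logic can be tested without touching `/sys` or `modprobe` (that injection is an assumption; the real script reads them live):

```shell
#!/bin/sh
# Sketch of the pick_driver/vfio logic from setup/driver.sh traced above.
pick_driver() {
    iommu_groups=$1   # count of entries under /sys/kernel/iommu_groups
    unsafe_vfio=$2    # contents of enable_unsafe_noiommu_mode (Y or N)
    modprobe_out=$3   # output of: modprobe --show-depends vfio_pci
    # vfio needs populated IOMMU groups, or unsafe no-IOMMU mode enabled.
    if [ "$iommu_groups" -eq 0 ] && [ "$unsafe_vfio" != "Y" ]; then
        echo "No valid driver found"
        return 1
    fi
    # is_driver: modprobe must resolve the module chain to at least one .ko.
    case $modprobe_out in
        *.ko*) echo vfio-pci ;;
        *)     echo "No valid driver found"; return 1 ;;
    esac
}

pick_driver 141 N "insmod /lib/modules/.../vfio-pci.ko.xz"   # -> vfio-pci
```

With 141 groups this matches the `(( 141 > 0 ))` branch in the trace; with zero groups and `unsafe_vfio=N` it falls through to the "No valid driver found" path that driver.sh@51 tests for.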
00:03:09.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.431 20:31:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:09.431 20:31:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:09.431 20:31:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.431 20:31:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:10.805 20:31:06 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:10.805 No valid GPT data, bailing 00:03:10.805 20:31:06 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:10.805 20:31:06 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:10.805 20:31:06 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:10.805 20:31:06 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:10.805 20:31:06 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:10.805 20:31:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:10.805 ************************************ 00:03:10.805 START TEST nvme_mount 00:03:10.805 ************************************ 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:10.805 20:31:06 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:10.805 20:31:06 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:12.179 Creating new GPT entries in memory. 00:03:12.179 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:12.179 other utilities. 00:03:12.179 20:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:12.179 20:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:12.179 20:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:12.179 20:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:12.179 20:31:07 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:13.114 Creating new GPT entries in memory. 00:03:13.114 The operation has completed successfully. 
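The `partition_drive` trace above converts a 1 GiB partition size to 512-byte sectors and derives the sgdisk bounds `--new=1:2048:2099199`. The arithmetic from setup/common.sh can be sketched directly: the first partition starts at sector 2048 and each later one starts right after the previous one ends.

```shell
#!/bin/sh
# Sketch of the partition-bounds arithmetic from setup/common.sh seen above.
size=1073741824          # bytes per partition (1 GiB)
size=$((size / 512))     # -> 2097152 sectors (common.sh@51)
part_no=1
part_start=0
part_end=0
part=1
while [ "$part" -le "$part_no" ]; do
    # common.sh@58-59: first partition starts at 2048, later ones at end+1.
    part_start=$((part_start == 0 ? 2048 : part_end + 1))
    part_end=$((part_start + size - 1))
    echo "--new=${part}:${part_start}:${part_end}"
    part=$((part + 1))
done
```

For `part_no=1` this prints `--new=1:2048:2099199`, exactly the argument flock passes to sgdisk in the trace (2048 + 2097152 - 1 = 2099199).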
00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1457979 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.114 20:31:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.047 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.048 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:14.306 
20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:14.306 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:14.306 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:14.565 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:14.565 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:14.565 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:14.565 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:14.565 20:31:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:14.565 20:31:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:14.565 20:31:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.565 20:31:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:14.565 20:31:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.565 20:31:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.939 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:15.940 20:31:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.940 20:31:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:17.314 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.314 00:03:17.314 real 0m6.288s 00:03:17.314 user 0m1.476s 00:03:17.314 sys 0m2.373s 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.314 20:31:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:17.314 ************************************ 00:03:17.314 END TEST nvme_mount 00:03:17.314 ************************************ 00:03:17.314 20:31:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:17.314 20:31:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:03:17.314 20:31:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.314 20:31:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:17.314 ************************************ 00:03:17.314 START TEST dm_mount 00:03:17.314 ************************************ 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.314 20:31:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:18.247 Creating new GPT entries in memory. 00:03:18.247 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:18.247 other utilities. 00:03:18.247 20:31:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:18.247 20:31:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.247 20:31:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:18.247 20:31:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:18.247 20:31:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:19.178 Creating new GPT entries in memory. 00:03:19.178 The operation has completed successfully. 00:03:19.178 20:31:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:19.178 20:31:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.178 20:31:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:19.178 20:31:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:19.178 20:31:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:20.551 The operation has completed successfully. 
00:03:20.551 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1460367 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.552 20:31:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:21.486 20:31:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.744 20:31:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.724 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:22.982 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:22.982 00:03:22.982 real 0m5.745s 00:03:22.982 user 0m0.974s 00:03:22.982 sys 0m1.598s 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.982 20:31:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:22.982 ************************************ 00:03:22.982 END TEST dm_mount 00:03:22.982 ************************************ 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:22.982 20:31:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:23.240 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:23.240 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:23.240 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:23.240 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:23.240 20:31:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:23.240 00:03:23.240 real 0m13.860s 00:03:23.240 user 0m3.048s 00:03:23.240 sys 0m4.957s 00:03:23.240 20:31:18 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.240 20:31:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:23.240 ************************************ 00:03:23.240 END TEST devices 00:03:23.240 ************************************ 00:03:23.240 00:03:23.240 real 0m42.983s 00:03:23.240 user 0m12.401s 00:03:23.240 sys 0m18.781s 00:03:23.240 20:31:18 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.240 20:31:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:23.240 ************************************ 00:03:23.240 END TEST setup.sh 00:03:23.240 ************************************ 00:03:23.240 20:31:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.613 Hugepages 00:03:24.613 node hugesize free / total 00:03:24.613 node0 1048576kB 0 / 0 00:03:24.613 node0 2048kB 2048 / 2048 00:03:24.613 node1 1048576kB 0 / 0 00:03:24.613 node1 2048kB 0 / 0 00:03:24.613 00:03:24.613 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.613 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:24.613 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:24.613 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:24.613 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:24.613 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:24.613 20:31:19 -- spdk/autotest.sh@130 -- # uname -s 00:03:24.613 20:31:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:24.613 20:31:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:24.613 20:31:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.547 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:25.547 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:25.547 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:25.547 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:25.804 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:25.804 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:25.804 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:25.804 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:25.804 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.739 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.739 20:31:22 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:28.110 20:31:23 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:03:28.110 20:31:23 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:28.110 20:31:23 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:28.110 20:31:23 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:28.110 20:31:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:28.110 20:31:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:28.110 20:31:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:28.110 20:31:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.110 20:31:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:28.110 20:31:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:28.110 20:31:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:28.110 20:31:23 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.042 Waiting for block devices as requested 00:03:29.042 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:29.042 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:29.300 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:29.300 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:29.300 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:29.557 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:29.557 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:29.557 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:29.557 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:29.815 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:29.815 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:29.815 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:29.815 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:30.073 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:30.073 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:03:30.073 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:30.330 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:30.330 20:31:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:30.330 20:31:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:30.330 20:31:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:30.330 20:31:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:30.330 20:31:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:30.330 20:31:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:30.330 20:31:25 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:30.330 20:31:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:30.330 20:31:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:30.330 20:31:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:30.330 20:31:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:30.330 20:31:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:30.330 20:31:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:30.330 20:31:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:30.330 20:31:25 -- 
common/autotest_common.sh@1557 -- # continue 00:03:30.330 20:31:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:30.330 20:31:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:30.330 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:03:30.330 20:31:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:30.330 20:31:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.330 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:03:30.330 20:31:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.703 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.703 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.703 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.703 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.703 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.703 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.704 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.704 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:31.704 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.704 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:32.639 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.639 20:31:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:32.639 20:31:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:32.639 20:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:32.639 20:31:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:32.639 20:31:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:32.639 20:31:28 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:32.639 20:31:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:32.639 20:31:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:32.639 20:31:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:32.639 20:31:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:32.639 20:31:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:32.639 20:31:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.639 20:31:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.639 20:31:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:32.897 20:31:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:32.897 20:31:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:32.897 20:31:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:32.897 20:31:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:32.897 20:31:28 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:32.897 20:31:28 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:32.897 20:31:28 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:32.897 20:31:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:32.897 20:31:28 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:32.897 20:31:28 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1465560 00:03:32.897 20:31:28 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.897 20:31:28 -- common/autotest_common.sh@1598 -- # waitforlisten 1465560 00:03:32.897 20:31:28 -- common/autotest_common.sh@831 -- # '[' -z 1465560 ']' 00:03:32.897 20:31:28 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:32.897 20:31:28 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:32.897 20:31:28 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.897 20:31:28 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:32.897 20:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:32.897 [2024-07-24 20:31:28.280891] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:03:32.897 [2024-07-24 20:31:28.280982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465560 ] 00:03:32.897 EAL: No free 2048 kB hugepages reported on node 1 00:03:32.897 [2024-07-24 20:31:28.342083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.897 [2024-07-24 20:31:28.457823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.831 20:31:29 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:33.831 20:31:29 -- common/autotest_common.sh@864 -- # return 0 00:03:33.831 20:31:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:33.831 20:31:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:33.831 20:31:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:37.110 nvme0n1 00:03:37.110 20:31:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:37.110 [2024-07-24 20:31:32.515923] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:03:37.110 [2024-07-24 20:31:32.515967] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:37.110 request: 00:03:37.110 { 00:03:37.110 "nvme_ctrlr_name": "nvme0", 00:03:37.110 "password": "test", 00:03:37.110 "method": "bdev_nvme_opal_revert", 00:03:37.110 "req_id": 1 00:03:37.110 } 00:03:37.110 Got JSON-RPC error response 00:03:37.110 response: 00:03:37.110 { 00:03:37.110 "code": -32603, 00:03:37.110 "message": "Internal error" 00:03:37.110 } 00:03:37.110 20:31:32 -- common/autotest_common.sh@1604 -- # true 00:03:37.110 20:31:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:37.110 20:31:32 -- common/autotest_common.sh@1608 -- # killprocess 1465560 00:03:37.110 20:31:32 -- common/autotest_common.sh@950 -- # '[' -z 1465560 ']' 00:03:37.110 20:31:32 -- common/autotest_common.sh@954 -- # kill -0 1465560 00:03:37.110 20:31:32 -- common/autotest_common.sh@955 -- # uname 00:03:37.110 20:31:32 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:37.111 20:31:32 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465560 00:03:37.111 20:31:32 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:37.111 20:31:32 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:37.111 20:31:32 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465560' 00:03:37.111 killing process with pid 1465560 00:03:37.111 20:31:32 -- common/autotest_common.sh@969 -- # kill 1465560 00:03:37.111 20:31:32 -- common/autotest_common.sh@974 -- # wait 1465560 00:03:39.007 20:31:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:39.007 20:31:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:39.007 20:31:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:39.008 20:31:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:39.008 20:31:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:39.008 20:31:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.008 
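
The opal revert failure above comes back through SPDK's JSON-RPC interface. As a hedged, dependency-free sketch (field names copied from the request block printed in the log; `grep` stands in for a real JSON parser, and the socket path is the `/var/tmp/spdk.sock` shown earlier), the request that `rpc.py bdev_nvme_opal_revert -b nvme0 -p test` sends looks like:

```shell
# Illustrative only: the JSON-RPC request behind
# "rpc.py bdev_nvme_opal_revert -b nvme0 -p test", with the exact fields
# shown in the log above. rpc.py normally writes this to /var/tmp/spdk.sock.
req='{"nvme_ctrlr_name": "nvme0", "password": "test", "method": "bdev_nvme_opal_revert", "req_id": 1}'
# Pull out the method name; grep is used to avoid assuming jq is installed.
echo "$req" | grep -o '"method": "[^"]*"'
```

On the drive under test the target answers with the `-32603 "Internal error"` response seen in the log, which the trace swallows via `true` so cleanup can continue.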
20:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 20:31:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:39.008 20:31:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.008 20:31:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.008 20:31:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.008 20:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 ************************************ 00:03:39.008 START TEST env 00:03:39.008 ************************************ 00:03:39.008 20:31:34 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.008 * Looking for test storage... 00:03:39.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:39.008 20:31:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.008 20:31:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.008 20:31:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.008 20:31:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 ************************************ 00:03:39.008 START TEST env_memory 00:03:39.008 ************************************ 00:03:39.008 20:31:34 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.008 00:03:39.008 00:03:39.008 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.008 http://cunit.sourceforge.net/ 00:03:39.008 00:03:39.008 00:03:39.008 Suite: memory 00:03:39.008 Test: alloc and free memory map ...[2024-07-24 20:31:34.500569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:39.008 passed 00:03:39.008 Test: mem map translation 
...[2024-07-24 20:31:34.521980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:39.008 [2024-07-24 20:31:34.522003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:39.008 [2024-07-24 20:31:34.522062] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:39.008 [2024-07-24 20:31:34.522075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:39.008 passed 00:03:39.008 Test: mem map registration ...[2024-07-24 20:31:34.565406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:39.008 [2024-07-24 20:31:34.565428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:39.266 passed 00:03:39.266 Test: mem map adjacent registrations ...passed 00:03:39.266 00:03:39.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.266 suites 1 1 n/a 0 0 00:03:39.266 tests 4 4 4 0 0 00:03:39.266 asserts 152 152 152 0 n/a 00:03:39.266 00:03:39.266 Elapsed time = 0.150 seconds 00:03:39.266 00:03:39.266 real 0m0.157s 00:03:39.266 user 0m0.151s 00:03:39.266 sys 0m0.005s 00:03:39.266 20:31:34 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.266 20:31:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:39.266 ************************************ 00:03:39.266 END TEST env_memory 00:03:39.266 
************************************ 00:03:39.266 20:31:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.266 20:31:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.266 20:31:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.267 20:31:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.267 ************************************ 00:03:39.267 START TEST env_vtophys 00:03:39.267 ************************************ 00:03:39.267 20:31:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.267 EAL: lib.eal log level changed from notice to debug 00:03:39.267 EAL: Detected lcore 0 as core 0 on socket 0 00:03:39.267 EAL: Detected lcore 1 as core 1 on socket 0 00:03:39.267 EAL: Detected lcore 2 as core 2 on socket 0 00:03:39.267 EAL: Detected lcore 3 as core 3 on socket 0 00:03:39.267 EAL: Detected lcore 4 as core 4 on socket 0 00:03:39.267 EAL: Detected lcore 5 as core 5 on socket 0 00:03:39.267 EAL: Detected lcore 6 as core 8 on socket 0 00:03:39.267 EAL: Detected lcore 7 as core 9 on socket 0 00:03:39.267 EAL: Detected lcore 8 as core 10 on socket 0 00:03:39.267 EAL: Detected lcore 9 as core 11 on socket 0 00:03:39.267 EAL: Detected lcore 10 as core 12 on socket 0 00:03:39.267 EAL: Detected lcore 11 as core 13 on socket 0 00:03:39.267 EAL: Detected lcore 12 as core 0 on socket 1 00:03:39.267 EAL: Detected lcore 13 as core 1 on socket 1 00:03:39.267 EAL: Detected lcore 14 as core 2 on socket 1 00:03:39.267 EAL: Detected lcore 15 as core 3 on socket 1 00:03:39.267 EAL: Detected lcore 16 as core 4 on socket 1 00:03:39.267 EAL: Detected lcore 17 as core 5 on socket 1 00:03:39.267 EAL: Detected lcore 18 as core 8 on socket 1 00:03:39.267 EAL: Detected lcore 19 as core 9 on socket 1 00:03:39.267 EAL: Detected lcore 20 as core 10 on socket 1 00:03:39.267 EAL: 
Detected lcore 21 as core 11 on socket 1 00:03:39.267 EAL: Detected lcore 22 as core 12 on socket 1 00:03:39.267 EAL: Detected lcore 23 as core 13 on socket 1 00:03:39.267 EAL: Detected lcore 24 as core 0 on socket 0 00:03:39.267 EAL: Detected lcore 25 as core 1 on socket 0 00:03:39.267 EAL: Detected lcore 26 as core 2 on socket 0 00:03:39.267 EAL: Detected lcore 27 as core 3 on socket 0 00:03:39.267 EAL: Detected lcore 28 as core 4 on socket 0 00:03:39.267 EAL: Detected lcore 29 as core 5 on socket 0 00:03:39.267 EAL: Detected lcore 30 as core 8 on socket 0 00:03:39.267 EAL: Detected lcore 31 as core 9 on socket 0 00:03:39.267 EAL: Detected lcore 32 as core 10 on socket 0 00:03:39.267 EAL: Detected lcore 33 as core 11 on socket 0 00:03:39.267 EAL: Detected lcore 34 as core 12 on socket 0 00:03:39.267 EAL: Detected lcore 35 as core 13 on socket 0 00:03:39.267 EAL: Detected lcore 36 as core 0 on socket 1 00:03:39.267 EAL: Detected lcore 37 as core 1 on socket 1 00:03:39.267 EAL: Detected lcore 38 as core 2 on socket 1 00:03:39.267 EAL: Detected lcore 39 as core 3 on socket 1 00:03:39.267 EAL: Detected lcore 40 as core 4 on socket 1 00:03:39.267 EAL: Detected lcore 41 as core 5 on socket 1 00:03:39.267 EAL: Detected lcore 42 as core 8 on socket 1 00:03:39.267 EAL: Detected lcore 43 as core 9 on socket 1 00:03:39.267 EAL: Detected lcore 44 as core 10 on socket 1 00:03:39.267 EAL: Detected lcore 45 as core 11 on socket 1 00:03:39.267 EAL: Detected lcore 46 as core 12 on socket 1 00:03:39.267 EAL: Detected lcore 47 as core 13 on socket 1 00:03:39.267 EAL: Maximum logical cores by configuration: 128 00:03:39.267 EAL: Detected CPU lcores: 48 00:03:39.267 EAL: Detected NUMA nodes: 2 00:03:39.267 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:39.267 EAL: Detected shared linkage of DPDK 00:03:39.267 EAL: No shared files mode enabled, IPC will be disabled 00:03:39.267 EAL: Bus pci wants IOVA as 'DC' 00:03:39.267 EAL: Buses did not request a specific IOVA mode. 
00:03:39.267 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:39.267 EAL: Selected IOVA mode 'VA' 00:03:39.267 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.267 EAL: Probing VFIO support... 00:03:39.267 EAL: IOMMU type 1 (Type 1) is supported 00:03:39.267 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:39.267 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:39.267 EAL: VFIO support initialized 00:03:39.267 EAL: Ask a virtual area of 0x2e000 bytes 00:03:39.267 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:39.267 EAL: Setting up physically contiguous memory... 00:03:39.267 EAL: Setting maximum number of open files to 524288 00:03:39.267 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:39.267 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:39.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:39.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:39.267 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.267 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:03:39.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.267 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.267 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:39.267 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:39.267 EAL: Hugepages will be freed exactly as allocated. 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: TSC frequency is ~2700000 KHz 00:03:39.267 EAL: Main lcore 0 is ready (tid=7f1fa701ea00;cpuset=[0]) 00:03:39.267 EAL: Trying to obtain current memory policy. 00:03:39.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.267 EAL: Restoring previous memory policy: 0 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was expanded by 2MB 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:39.267 EAL: Mem event callback 'spdk:(nil)' registered 00:03:39.267 00:03:39.267 00:03:39.267 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.267 http://cunit.sourceforge.net/ 00:03:39.267 00:03:39.267 00:03:39.267 Suite: components_suite 00:03:39.267 Test: vtophys_malloc_test ...passed 00:03:39.267 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
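
The repeated `size = 0x400000000` VA reservations in the EAL trace above follow directly from the segment-list parameters it prints (`n_segs:8192`, `hugepage_sz:2097152`). A quick shell check of that arithmetic:

```shell
# Each EAL memseg list reserves n_segs * hugepage_sz bytes of virtual
# address space: 8192 segments of 2 MiB pages = 16 GiB = 0x400000000,
# matching every "size = 0x400000000" area in the trace above.
n_segs=8192            # from "n_segs:8192"
hugepage_sz=2097152    # 2 MiB, from "hugepage_sz:2097152"
printf '0x%x\n' $(( n_segs * hugepage_sz ))
```

With 4 lists per socket and 2 NUMA nodes, that is 8 such reservations, which is why the trace shows eight alternating 0x61000 header plus 0x400000000 data areas.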
00:03:39.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.267 EAL: Restoring previous memory policy: 4 00:03:39.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was expanded by 4MB 00:03:39.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was shrunk by 4MB 00:03:39.267 EAL: Trying to obtain current memory policy. 00:03:39.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.267 EAL: Restoring previous memory policy: 4 00:03:39.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was expanded by 6MB 00:03:39.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was shrunk by 6MB 00:03:39.267 EAL: Trying to obtain current memory policy. 00:03:39.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.267 EAL: Restoring previous memory policy: 4 00:03:39.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.267 EAL: request: mp_malloc_sync 00:03:39.267 EAL: No shared files mode enabled, IPC is disabled 00:03:39.267 EAL: Heap on socket 0 was expanded by 10MB 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was shrunk by 10MB 00:03:39.268 EAL: Trying to obtain current memory policy. 
00:03:39.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.268 EAL: Restoring previous memory policy: 4 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was expanded by 18MB 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was shrunk by 18MB 00:03:39.268 EAL: Trying to obtain current memory policy. 00:03:39.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.268 EAL: Restoring previous memory policy: 4 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was expanded by 34MB 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was shrunk by 34MB 00:03:39.268 EAL: Trying to obtain current memory policy. 00:03:39.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.268 EAL: Restoring previous memory policy: 4 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was expanded by 66MB 00:03:39.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.268 EAL: request: mp_malloc_sync 00:03:39.268 EAL: No shared files mode enabled, IPC is disabled 00:03:39.268 EAL: Heap on socket 0 was shrunk by 66MB 00:03:39.268 EAL: Trying to obtain current memory policy. 
00:03:39.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.528 EAL: Restoring previous memory policy: 4 00:03:39.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.528 EAL: request: mp_malloc_sync 00:03:39.528 EAL: No shared files mode enabled, IPC is disabled 00:03:39.528 EAL: Heap on socket 0 was expanded by 130MB 00:03:39.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.528 EAL: request: mp_malloc_sync 00:03:39.528 EAL: No shared files mode enabled, IPC is disabled 00:03:39.528 EAL: Heap on socket 0 was shrunk by 130MB 00:03:39.528 EAL: Trying to obtain current memory policy. 00:03:39.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.528 EAL: Restoring previous memory policy: 4 00:03:39.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.528 EAL: request: mp_malloc_sync 00:03:39.528 EAL: No shared files mode enabled, IPC is disabled 00:03:39.528 EAL: Heap on socket 0 was expanded by 258MB 00:03:39.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.528 EAL: request: mp_malloc_sync 00:03:39.528 EAL: No shared files mode enabled, IPC is disabled 00:03:39.528 EAL: Heap on socket 0 was shrunk by 258MB 00:03:39.528 EAL: Trying to obtain current memory policy. 00:03:39.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.821 EAL: Restoring previous memory policy: 4 00:03:39.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.821 EAL: request: mp_malloc_sync 00:03:39.821 EAL: No shared files mode enabled, IPC is disabled 00:03:39.821 EAL: Heap on socket 0 was expanded by 514MB 00:03:39.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.079 EAL: request: mp_malloc_sync 00:03:40.079 EAL: No shared files mode enabled, IPC is disabled 00:03:40.079 EAL: Heap on socket 0 was shrunk by 514MB 00:03:40.079 EAL: Trying to obtain current memory policy. 
00:03:40.079 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.337 EAL: Restoring previous memory policy: 4 00:03:40.337 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.337 EAL: request: mp_malloc_sync 00:03:40.337 EAL: No shared files mode enabled, IPC is disabled 00:03:40.337 EAL: Heap on socket 0 was expanded by 1026MB 00:03:40.595 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.853 EAL: request: mp_malloc_sync 00:03:40.853 EAL: No shared files mode enabled, IPC is disabled 00:03:40.853 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.853 passed 00:03:40.853 00:03:40.853 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.853 suites 1 1 n/a 0 0 00:03:40.853 tests 2 2 2 0 0 00:03:40.853 asserts 497 497 497 0 n/a 00:03:40.853 00:03:40.853 Elapsed time = 1.386 seconds 00:03:40.853 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.853 EAL: request: mp_malloc_sync 00:03:40.853 EAL: No shared files mode enabled, IPC is disabled 00:03:40.853 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.853 EAL: No shared files mode enabled, IPC is disabled 00:03:40.853 EAL: No shared files mode enabled, IPC is disabled 00:03:40.853 EAL: No shared files mode enabled, IPC is disabled 00:03:40.853 00:03:40.853 real 0m1.506s 00:03:40.853 user 0m0.867s 00:03:40.853 sys 0m0.604s 00:03:40.853 20:31:36 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.853 20:31:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.853 ************************************ 00:03:40.853 END TEST env_vtophys 00:03:40.853 ************************************ 00:03:40.853 20:31:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.853 20:31:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.853 20:31:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.853 20:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.853 
************************************ 00:03:40.853 START TEST env_pci 00:03:40.853 ************************************ 00:03:40.853 20:31:36 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.853 00:03:40.853 00:03:40.853 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.853 http://cunit.sourceforge.net/ 00:03:40.853 00:03:40.853 00:03:40.853 Suite: pci 00:03:40.853 Test: pci_hook ...[2024-07-24 20:31:36.231657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1466576 has claimed it 00:03:40.853 EAL: Cannot find device (10000:00:01.0) 00:03:40.853 EAL: Failed to attach device on primary process 00:03:40.853 passed 00:03:40.853 00:03:40.853 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.853 suites 1 1 n/a 0 0 00:03:40.853 tests 1 1 1 0 0 00:03:40.853 asserts 25 25 25 0 n/a 00:03:40.853 00:03:40.853 Elapsed time = 0.019 seconds 00:03:40.853 00:03:40.853 real 0m0.029s 00:03:40.853 user 0m0.013s 00:03:40.854 sys 0m0.016s 00:03:40.854 20:31:36 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.854 20:31:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:40.854 ************************************ 00:03:40.854 END TEST env_pci 00:03:40.854 ************************************ 00:03:40.854 20:31:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:40.854 20:31:36 env -- env/env.sh@15 -- # uname 00:03:40.854 20:31:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:40.854 20:31:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:40.854 20:31:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.854 20:31:36 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:40.854 20:31:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.854 20:31:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.854 ************************************ 00:03:40.854 START TEST env_dpdk_post_init 00:03:40.854 ************************************ 00:03:40.854 20:31:36 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.854 EAL: Detected CPU lcores: 48 00:03:40.854 EAL: Detected NUMA nodes: 2 00:03:40.854 EAL: Detected shared linkage of DPDK 00:03:40.854 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.854 EAL: Selected IOVA mode 'VA' 00:03:40.854 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.854 EAL: VFIO support initialized 00:03:40.854 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.854 EAL: Using IOMMU type 1 (Type 1) 00:03:40.854 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:41.112 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:42.046 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:45.325 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:45.325 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:45.325 Starting DPDK initialization... 00:03:45.325 Starting SPDK post initialization... 00:03:45.325 SPDK NVMe probe 00:03:45.325 Attaching to 0000:88:00.0 00:03:45.325 Attached to 0000:88:00.0 00:03:45.325 Cleaning up... 00:03:45.325 00:03:45.325 real 0m4.403s 00:03:45.325 user 0m3.279s 00:03:45.325 sys 0m0.178s 00:03:45.325 20:31:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.325 20:31:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.325 ************************************ 00:03:45.325 END TEST env_dpdk_post_init 00:03:45.325 ************************************ 00:03:45.325 20:31:40 env -- env/env.sh@26 -- # uname 00:03:45.325 20:31:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:45.325 20:31:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.325 20:31:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.325 20:31:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.325 20:31:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.325 ************************************ 00:03:45.325 START TEST env_mem_callbacks 00:03:45.325 
************************************ 00:03:45.325 20:31:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.325 EAL: Detected CPU lcores: 48 00:03:45.325 EAL: Detected NUMA nodes: 2 00:03:45.325 EAL: Detected shared linkage of DPDK 00:03:45.326 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.326 EAL: Selected IOVA mode 'VA' 00:03:45.326 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.326 EAL: VFIO support initialized 00:03:45.326 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.326 00:03:45.326 00:03:45.326 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.326 http://cunit.sourceforge.net/ 00:03:45.326 00:03:45.326 00:03:45.326 Suite: memory 00:03:45.326 Test: test ... 00:03:45.326 register 0x200000200000 2097152 00:03:45.326 malloc 3145728 00:03:45.326 register 0x200000400000 4194304 00:03:45.326 buf 0x200000500000 len 3145728 PASSED 00:03:45.326 malloc 64 00:03:45.326 buf 0x2000004fff40 len 64 PASSED 00:03:45.326 malloc 4194304 00:03:45.326 register 0x200000800000 6291456 00:03:45.326 buf 0x200000a00000 len 4194304 PASSED 00:03:45.326 free 0x200000500000 3145728 00:03:45.326 free 0x2000004fff40 64 00:03:45.326 unregister 0x200000400000 4194304 PASSED 00:03:45.326 free 0x200000a00000 4194304 00:03:45.326 unregister 0x200000800000 6291456 PASSED 00:03:45.326 malloc 8388608 00:03:45.326 register 0x200000400000 10485760 00:03:45.326 buf 0x200000600000 len 8388608 PASSED 00:03:45.326 free 0x200000600000 8388608 00:03:45.326 unregister 0x200000400000 10485760 PASSED 00:03:45.326 passed 00:03:45.326 00:03:45.326 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.326 suites 1 1 n/a 0 0 00:03:45.326 tests 1 1 1 0 0 00:03:45.326 asserts 15 15 15 0 n/a 00:03:45.326 00:03:45.326 Elapsed time = 0.005 seconds 00:03:45.326 00:03:45.326 real 0m0.049s 00:03:45.326 user 0m0.018s 00:03:45.326 sys 0m0.031s 
00:03:45.326 20:31:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.326 20:31:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:45.326 ************************************ 00:03:45.326 END TEST env_mem_callbacks 00:03:45.326 ************************************ 00:03:45.326 00:03:45.326 real 0m6.427s 00:03:45.326 user 0m4.443s 00:03:45.326 sys 0m1.020s 00:03:45.326 20:31:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.326 20:31:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.326 ************************************ 00:03:45.326 END TEST env 00:03:45.326 ************************************ 00:03:45.326 20:31:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.326 20:31:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.326 20:31:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.326 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:45.326 ************************************ 00:03:45.326 START TEST rpc 00:03:45.326 ************************************ 00:03:45.326 20:31:40 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:45.584 * Looking for test storage... 
00:03:45.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.584 20:31:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1467227 00:03:45.584 20:31:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:45.584 20:31:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.584 20:31:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1467227 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 1467227 ']' 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:45.584 20:31:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.584 [2024-07-24 20:31:40.973643] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:03:45.584 [2024-07-24 20:31:40.973727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467227 ] 00:03:45.584 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.584 [2024-07-24 20:31:41.030176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.584 [2024-07-24 20:31:41.135530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:45.584 [2024-07-24 20:31:41.135607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1467227' to capture a snapshot of events at runtime. 00:03:45.584 [2024-07-24 20:31:41.135620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:45.584 [2024-07-24 20:31:41.135631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:45.584 [2024-07-24 20:31:41.135640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1467227 for offline analysis/debug. 00:03:45.584 [2024-07-24 20:31:41.135668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.842 20:31:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:45.842 20:31:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:45.842 20:31:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.842 20:31:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:45.842 20:31:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:45.842 20:31:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:45.842 20:31:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.842 20:31:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.842 20:31:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.100 
************************************ 00:03:46.100 START TEST rpc_integrity 00:03:46.100 ************************************ 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.100 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.100 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.100 { 00:03:46.100 "name": "Malloc0", 00:03:46.100 "aliases": [ 00:03:46.100 "4a6d949f-1301-4b14-bb7c-f174268bbcf1" 00:03:46.100 ], 00:03:46.100 "product_name": "Malloc disk", 00:03:46.100 "block_size": 512, 00:03:46.100 "num_blocks": 16384, 00:03:46.100 "uuid": "4a6d949f-1301-4b14-bb7c-f174268bbcf1", 00:03:46.100 
"assigned_rate_limits": { 00:03:46.100 "rw_ios_per_sec": 0, 00:03:46.100 "rw_mbytes_per_sec": 0, 00:03:46.100 "r_mbytes_per_sec": 0, 00:03:46.100 "w_mbytes_per_sec": 0 00:03:46.100 }, 00:03:46.100 "claimed": false, 00:03:46.100 "zoned": false, 00:03:46.100 "supported_io_types": { 00:03:46.100 "read": true, 00:03:46.100 "write": true, 00:03:46.100 "unmap": true, 00:03:46.100 "flush": true, 00:03:46.100 "reset": true, 00:03:46.100 "nvme_admin": false, 00:03:46.100 "nvme_io": false, 00:03:46.100 "nvme_io_md": false, 00:03:46.100 "write_zeroes": true, 00:03:46.100 "zcopy": true, 00:03:46.100 "get_zone_info": false, 00:03:46.100 "zone_management": false, 00:03:46.100 "zone_append": false, 00:03:46.100 "compare": false, 00:03:46.100 "compare_and_write": false, 00:03:46.100 "abort": true, 00:03:46.100 "seek_hole": false, 00:03:46.100 "seek_data": false, 00:03:46.100 "copy": true, 00:03:46.100 "nvme_iov_md": false 00:03:46.100 }, 00:03:46.100 "memory_domains": [ 00:03:46.100 { 00:03:46.100 "dma_device_id": "system", 00:03:46.100 "dma_device_type": 1 00:03:46.100 }, 00:03:46.100 { 00:03:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.101 "dma_device_type": 2 00:03:46.101 } 00:03:46.101 ], 00:03:46.101 "driver_specific": {} 00:03:46.101 } 00:03:46.101 ]' 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 [2024-07-24 20:31:41.521767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.101 [2024-07-24 20:31:41.521813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.101 [2024-07-24 20:31:41.521837] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1accd50 00:03:46.101 [2024-07-24 20:31:41.521853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.101 [2024-07-24 20:31:41.523280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.101 [2024-07-24 20:31:41.523308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.101 Passthru0 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.101 { 00:03:46.101 "name": "Malloc0", 00:03:46.101 "aliases": [ 00:03:46.101 "4a6d949f-1301-4b14-bb7c-f174268bbcf1" 00:03:46.101 ], 00:03:46.101 "product_name": "Malloc disk", 00:03:46.101 "block_size": 512, 00:03:46.101 "num_blocks": 16384, 00:03:46.101 "uuid": "4a6d949f-1301-4b14-bb7c-f174268bbcf1", 00:03:46.101 "assigned_rate_limits": { 00:03:46.101 "rw_ios_per_sec": 0, 00:03:46.101 "rw_mbytes_per_sec": 0, 00:03:46.101 "r_mbytes_per_sec": 0, 00:03:46.101 "w_mbytes_per_sec": 0 00:03:46.101 }, 00:03:46.101 "claimed": true, 00:03:46.101 "claim_type": "exclusive_write", 00:03:46.101 "zoned": false, 00:03:46.101 "supported_io_types": { 00:03:46.101 "read": true, 00:03:46.101 "write": true, 00:03:46.101 "unmap": true, 00:03:46.101 "flush": true, 00:03:46.101 "reset": true, 00:03:46.101 "nvme_admin": false, 00:03:46.101 "nvme_io": false, 00:03:46.101 "nvme_io_md": false, 00:03:46.101 "write_zeroes": true, 00:03:46.101 "zcopy": true, 00:03:46.101 "get_zone_info": false, 00:03:46.101 
"zone_management": false, 00:03:46.101 "zone_append": false, 00:03:46.101 "compare": false, 00:03:46.101 "compare_and_write": false, 00:03:46.101 "abort": true, 00:03:46.101 "seek_hole": false, 00:03:46.101 "seek_data": false, 00:03:46.101 "copy": true, 00:03:46.101 "nvme_iov_md": false 00:03:46.101 }, 00:03:46.101 "memory_domains": [ 00:03:46.101 { 00:03:46.101 "dma_device_id": "system", 00:03:46.101 "dma_device_type": 1 00:03:46.101 }, 00:03:46.101 { 00:03:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.101 "dma_device_type": 2 00:03:46.101 } 00:03:46.101 ], 00:03:46.101 "driver_specific": {} 00:03:46.101 }, 00:03:46.101 { 00:03:46.101 "name": "Passthru0", 00:03:46.101 "aliases": [ 00:03:46.101 "0d6f64cc-cfd6-504f-8885-1c6aca9eb569" 00:03:46.101 ], 00:03:46.101 "product_name": "passthru", 00:03:46.101 "block_size": 512, 00:03:46.101 "num_blocks": 16384, 00:03:46.101 "uuid": "0d6f64cc-cfd6-504f-8885-1c6aca9eb569", 00:03:46.101 "assigned_rate_limits": { 00:03:46.101 "rw_ios_per_sec": 0, 00:03:46.101 "rw_mbytes_per_sec": 0, 00:03:46.101 "r_mbytes_per_sec": 0, 00:03:46.101 "w_mbytes_per_sec": 0 00:03:46.101 }, 00:03:46.101 "claimed": false, 00:03:46.101 "zoned": false, 00:03:46.101 "supported_io_types": { 00:03:46.101 "read": true, 00:03:46.101 "write": true, 00:03:46.101 "unmap": true, 00:03:46.101 "flush": true, 00:03:46.101 "reset": true, 00:03:46.101 "nvme_admin": false, 00:03:46.101 "nvme_io": false, 00:03:46.101 "nvme_io_md": false, 00:03:46.101 "write_zeroes": true, 00:03:46.101 "zcopy": true, 00:03:46.101 "get_zone_info": false, 00:03:46.101 "zone_management": false, 00:03:46.101 "zone_append": false, 00:03:46.101 "compare": false, 00:03:46.101 "compare_and_write": false, 00:03:46.101 "abort": true, 00:03:46.101 "seek_hole": false, 00:03:46.101 "seek_data": false, 00:03:46.101 "copy": true, 00:03:46.101 "nvme_iov_md": false 00:03:46.101 }, 00:03:46.101 "memory_domains": [ 00:03:46.101 { 00:03:46.101 "dma_device_id": "system", 00:03:46.101 
"dma_device_type": 1 00:03:46.101 }, 00:03:46.101 { 00:03:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.101 "dma_device_type": 2 00:03:46.101 } 00:03:46.101 ], 00:03:46.101 "driver_specific": { 00:03:46.101 "passthru": { 00:03:46.101 "name": "Passthru0", 00:03:46.101 "base_bdev_name": "Malloc0" 00:03:46.101 } 00:03:46.101 } 00:03:46.101 } 00:03:46.101 ]' 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.101 20:31:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.101 00:03:46.101 real 0m0.225s 00:03:46.101 user 0m0.150s 00:03:46.101 sys 0m0.019s 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:03:46.101 20:31:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.101 ************************************ 00:03:46.101 END TEST rpc_integrity 00:03:46.101 ************************************ 00:03:46.101 20:31:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.101 20:31:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.101 20:31:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.101 20:31:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 ************************************ 00:03:46.360 START TEST rpc_plugins 00:03:46.360 ************************************ 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:46.360 { 00:03:46.360 "name": "Malloc1", 00:03:46.360 "aliases": [ 00:03:46.360 "d1b2a18b-92e3-4983-9f91-24d45b1db779" 00:03:46.360 ], 00:03:46.360 "product_name": "Malloc disk", 00:03:46.360 "block_size": 4096, 00:03:46.360 "num_blocks": 256, 00:03:46.360 "uuid": "d1b2a18b-92e3-4983-9f91-24d45b1db779", 00:03:46.360 "assigned_rate_limits": { 00:03:46.360 
"rw_ios_per_sec": 0, 00:03:46.360 "rw_mbytes_per_sec": 0, 00:03:46.360 "r_mbytes_per_sec": 0, 00:03:46.360 "w_mbytes_per_sec": 0 00:03:46.360 }, 00:03:46.360 "claimed": false, 00:03:46.360 "zoned": false, 00:03:46.360 "supported_io_types": { 00:03:46.360 "read": true, 00:03:46.360 "write": true, 00:03:46.360 "unmap": true, 00:03:46.360 "flush": true, 00:03:46.360 "reset": true, 00:03:46.360 "nvme_admin": false, 00:03:46.360 "nvme_io": false, 00:03:46.360 "nvme_io_md": false, 00:03:46.360 "write_zeroes": true, 00:03:46.360 "zcopy": true, 00:03:46.360 "get_zone_info": false, 00:03:46.360 "zone_management": false, 00:03:46.360 "zone_append": false, 00:03:46.360 "compare": false, 00:03:46.360 "compare_and_write": false, 00:03:46.360 "abort": true, 00:03:46.360 "seek_hole": false, 00:03:46.360 "seek_data": false, 00:03:46.360 "copy": true, 00:03:46.360 "nvme_iov_md": false 00:03:46.360 }, 00:03:46.360 "memory_domains": [ 00:03:46.360 { 00:03:46.360 "dma_device_id": "system", 00:03:46.360 "dma_device_type": 1 00:03:46.360 }, 00:03:46.360 { 00:03:46.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.360 "dma_device_type": 2 00:03:46.360 } 00:03:46.360 ], 00:03:46.360 "driver_specific": {} 00:03:46.360 } 00:03:46.360 ]' 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:46.360 20:31:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.360 00:03:46.360 real 0m0.114s 00:03:46.360 user 0m0.073s 00:03:46.360 sys 0m0.011s 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.360 20:31:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 ************************************ 00:03:46.360 END TEST rpc_plugins 00:03:46.360 ************************************ 00:03:46.360 20:31:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.360 20:31:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.360 20:31:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.360 20:31:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.360 ************************************ 00:03:46.360 START TEST rpc_trace_cmd_test 00:03:46.360 ************************************ 00:03:46.360 20:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:46.360 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:46.360 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:46.361 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1467227", 00:03:46.361 "tpoint_group_mask": "0x8", 00:03:46.361 "iscsi_conn": { 00:03:46.361 "mask": "0x2", 00:03:46.361 
"tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "scsi": { 00:03:46.361 "mask": "0x4", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "bdev": { 00:03:46.361 "mask": "0x8", 00:03:46.361 "tpoint_mask": "0xffffffffffffffff" 00:03:46.361 }, 00:03:46.361 "nvmf_rdma": { 00:03:46.361 "mask": "0x10", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "nvmf_tcp": { 00:03:46.361 "mask": "0x20", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "ftl": { 00:03:46.361 "mask": "0x40", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "blobfs": { 00:03:46.361 "mask": "0x80", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "dsa": { 00:03:46.361 "mask": "0x200", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "thread": { 00:03:46.361 "mask": "0x400", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "nvme_pcie": { 00:03:46.361 "mask": "0x800", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "iaa": { 00:03:46.361 "mask": "0x1000", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "nvme_tcp": { 00:03:46.361 "mask": "0x2000", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "bdev_nvme": { 00:03:46.361 "mask": "0x4000", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 }, 00:03:46.361 "sock": { 00:03:46.361 "mask": "0x8000", 00:03:46.361 "tpoint_mask": "0x0" 00:03:46.361 } 00:03:46.361 }' 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:46.361 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:46.619 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:46.619 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:46.619 20:31:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:46.619 20:31:41 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:46.619 20:31:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:46.619 20:31:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:46.619 20:31:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:46.619 00:03:46.619 real 0m0.202s 00:03:46.619 user 0m0.178s 00:03:46.619 sys 0m0.017s 00:03:46.619 20:31:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.619 20:31:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.619 ************************************ 00:03:46.619 END TEST rpc_trace_cmd_test 00:03:46.619 ************************************ 00:03:46.619 20:31:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:46.619 20:31:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:46.619 20:31:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:46.619 20:31:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.619 20:31:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.619 20:31:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.619 ************************************ 00:03:46.619 START TEST rpc_daemon_integrity 00:03:46.619 ************************************ 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.619 20:31:42 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.619 { 00:03:46.619 "name": "Malloc2", 00:03:46.619 "aliases": [ 00:03:46.619 "61c9ec24-c98b-4609-8671-3edc026e1f0a" 00:03:46.619 ], 00:03:46.619 "product_name": "Malloc disk", 00:03:46.619 "block_size": 512, 00:03:46.619 "num_blocks": 16384, 00:03:46.619 "uuid": "61c9ec24-c98b-4609-8671-3edc026e1f0a", 00:03:46.619 "assigned_rate_limits": { 00:03:46.619 "rw_ios_per_sec": 0, 00:03:46.619 "rw_mbytes_per_sec": 0, 00:03:46.619 "r_mbytes_per_sec": 0, 00:03:46.619 "w_mbytes_per_sec": 0 00:03:46.619 }, 00:03:46.619 "claimed": false, 00:03:46.619 "zoned": false, 00:03:46.619 "supported_io_types": { 00:03:46.619 "read": true, 00:03:46.619 "write": true, 00:03:46.619 "unmap": true, 00:03:46.619 "flush": true, 00:03:46.619 "reset": true, 00:03:46.619 "nvme_admin": false, 00:03:46.619 "nvme_io": false, 00:03:46.619 "nvme_io_md": false, 00:03:46.619 "write_zeroes": true, 00:03:46.619 "zcopy": true, 00:03:46.619 "get_zone_info": false, 00:03:46.619 "zone_management": false, 00:03:46.619 
"zone_append": false, 00:03:46.619 "compare": false, 00:03:46.619 "compare_and_write": false, 00:03:46.619 "abort": true, 00:03:46.619 "seek_hole": false, 00:03:46.619 "seek_data": false, 00:03:46.619 "copy": true, 00:03:46.619 "nvme_iov_md": false 00:03:46.619 }, 00:03:46.619 "memory_domains": [ 00:03:46.619 { 00:03:46.619 "dma_device_id": "system", 00:03:46.619 "dma_device_type": 1 00:03:46.619 }, 00:03:46.619 { 00:03:46.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.619 "dma_device_type": 2 00:03:46.619 } 00:03:46.619 ], 00:03:46.619 "driver_specific": {} 00:03:46.619 } 00:03:46.619 ]' 00:03:46.619 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.877 [2024-07-24 20:31:42.199764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:46.877 [2024-07-24 20:31:42.199809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.877 [2024-07-24 20:31:42.199839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1acdc00 00:03:46.877 [2024-07-24 20:31:42.199856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.877 [2024-07-24 20:31:42.201211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.877 [2024-07-24 20:31:42.201249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.877 Passthru0 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.877 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.877 { 00:03:46.877 "name": "Malloc2", 00:03:46.877 "aliases": [ 00:03:46.877 "61c9ec24-c98b-4609-8671-3edc026e1f0a" 00:03:46.877 ], 00:03:46.877 "product_name": "Malloc disk", 00:03:46.877 "block_size": 512, 00:03:46.877 "num_blocks": 16384, 00:03:46.877 "uuid": "61c9ec24-c98b-4609-8671-3edc026e1f0a", 00:03:46.877 "assigned_rate_limits": { 00:03:46.877 "rw_ios_per_sec": 0, 00:03:46.877 "rw_mbytes_per_sec": 0, 00:03:46.877 "r_mbytes_per_sec": 0, 00:03:46.877 "w_mbytes_per_sec": 0 00:03:46.877 }, 00:03:46.877 "claimed": true, 00:03:46.877 "claim_type": "exclusive_write", 00:03:46.877 "zoned": false, 00:03:46.877 "supported_io_types": { 00:03:46.877 "read": true, 00:03:46.877 "write": true, 00:03:46.877 "unmap": true, 00:03:46.877 "flush": true, 00:03:46.877 "reset": true, 00:03:46.877 "nvme_admin": false, 00:03:46.877 "nvme_io": false, 00:03:46.877 "nvme_io_md": false, 00:03:46.877 "write_zeroes": true, 00:03:46.877 "zcopy": true, 00:03:46.877 "get_zone_info": false, 00:03:46.877 "zone_management": false, 00:03:46.877 "zone_append": false, 00:03:46.877 "compare": false, 00:03:46.877 "compare_and_write": false, 00:03:46.877 "abort": true, 00:03:46.877 "seek_hole": false, 00:03:46.877 "seek_data": false, 00:03:46.877 "copy": true, 00:03:46.877 "nvme_iov_md": false 00:03:46.877 }, 00:03:46.877 "memory_domains": [ 00:03:46.877 { 00:03:46.877 "dma_device_id": "system", 00:03:46.877 "dma_device_type": 1 00:03:46.877 }, 00:03:46.877 { 00:03:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.877 "dma_device_type": 2 00:03:46.877 } 00:03:46.877 ], 00:03:46.877 
"driver_specific": {} 00:03:46.877 }, 00:03:46.877 { 00:03:46.877 "name": "Passthru0", 00:03:46.877 "aliases": [ 00:03:46.877 "abe559f6-6036-5a04-b641-45808034738d" 00:03:46.877 ], 00:03:46.877 "product_name": "passthru", 00:03:46.877 "block_size": 512, 00:03:46.877 "num_blocks": 16384, 00:03:46.877 "uuid": "abe559f6-6036-5a04-b641-45808034738d", 00:03:46.877 "assigned_rate_limits": { 00:03:46.877 "rw_ios_per_sec": 0, 00:03:46.877 "rw_mbytes_per_sec": 0, 00:03:46.877 "r_mbytes_per_sec": 0, 00:03:46.877 "w_mbytes_per_sec": 0 00:03:46.877 }, 00:03:46.877 "claimed": false, 00:03:46.877 "zoned": false, 00:03:46.877 "supported_io_types": { 00:03:46.877 "read": true, 00:03:46.877 "write": true, 00:03:46.877 "unmap": true, 00:03:46.877 "flush": true, 00:03:46.877 "reset": true, 00:03:46.877 "nvme_admin": false, 00:03:46.877 "nvme_io": false, 00:03:46.877 "nvme_io_md": false, 00:03:46.877 "write_zeroes": true, 00:03:46.877 "zcopy": true, 00:03:46.877 "get_zone_info": false, 00:03:46.877 "zone_management": false, 00:03:46.877 "zone_append": false, 00:03:46.877 "compare": false, 00:03:46.877 "compare_and_write": false, 00:03:46.877 "abort": true, 00:03:46.877 "seek_hole": false, 00:03:46.877 "seek_data": false, 00:03:46.877 "copy": true, 00:03:46.877 "nvme_iov_md": false 00:03:46.877 }, 00:03:46.877 "memory_domains": [ 00:03:46.877 { 00:03:46.877 "dma_device_id": "system", 00:03:46.877 "dma_device_type": 1 00:03:46.877 }, 00:03:46.877 { 00:03:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.877 "dma_device_type": 2 00:03:46.877 } 00:03:46.877 ], 00:03:46.877 "driver_specific": { 00:03:46.877 "passthru": { 00:03:46.877 "name": "Passthru0", 00:03:46.878 "base_bdev_name": "Malloc2" 00:03:46.878 } 00:03:46.878 } 00:03:46.878 } 00:03:46.878 ]' 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.878 00:03:46.878 real 0m0.226s 00:03:46.878 user 0m0.148s 00:03:46.878 sys 0m0.025s 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.878 20:31:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.878 ************************************ 00:03:46.878 END TEST rpc_daemon_integrity 00:03:46.878 ************************************ 00:03:46.878 20:31:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:46.878 20:31:42 rpc -- rpc/rpc.sh@84 -- # killprocess 1467227 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@950 -- # '[' -z 1467227 ']' 
00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@954 -- # kill -0 1467227 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@955 -- # uname 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467227 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467227' 00:03:46.878 killing process with pid 1467227 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@969 -- # kill 1467227 00:03:46.878 20:31:42 rpc -- common/autotest_common.sh@974 -- # wait 1467227 00:03:47.444 00:03:47.444 real 0m1.932s 00:03:47.444 user 0m2.422s 00:03:47.444 sys 0m0.578s 00:03:47.444 20:31:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.444 20:31:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.444 ************************************ 00:03:47.444 END TEST rpc 00:03:47.444 ************************************ 00:03:47.444 20:31:42 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.444 20:31:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.444 20:31:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.444 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.444 ************************************ 00:03:47.444 START TEST skip_rpc 00:03:47.444 ************************************ 00:03:47.444 20:31:42 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:47.444 * Looking for test storage... 
00:03:47.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.444 20:31:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.444 20:31:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.444 20:31:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:47.444 20:31:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.444 20:31:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.444 20:31:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.444 ************************************ 00:03:47.444 START TEST skip_rpc 00:03:47.444 ************************************ 00:03:47.444 20:31:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:47.444 20:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1467664 00:03:47.444 20:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:47.444 20:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.444 20:31:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:47.444 [2024-07-24 20:31:42.976902] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:03:47.444 [2024-07-24 20:31:42.976970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467664 ] 00:03:47.444 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.702 [2024-07-24 20:31:43.037865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.702 [2024-07-24 20:31:43.154318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1467664 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1467664 ']' 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1467664 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467664 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467664' 00:03:52.958 killing process with pid 1467664 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1467664 00:03:52.958 20:31:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1467664 00:03:52.958 00:03:52.958 real 0m5.487s 00:03:52.958 user 0m5.159s 00:03:52.958 sys 0m0.332s 00:03:52.958 20:31:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.958 20:31:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.958 ************************************ 00:03:52.958 END TEST skip_rpc 00:03:52.958 ************************************ 00:03:52.958 20:31:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:52.958 20:31:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.958 20:31:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.958 20:31:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.958 
************************************ 00:03:52.958 START TEST skip_rpc_with_json 00:03:52.958 ************************************ 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1468356 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1468356 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1468356 ']' 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:52.958 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.958 [2024-07-24 20:31:48.513337] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:03:52.958 [2024-07-24 20:31:48.513418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468356 ] 00:03:53.216 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.216 [2024-07-24 20:31:48.571318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.216 [2024-07-24 20:31:48.681574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.474 [2024-07-24 20:31:48.942992] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:53.474 request: 00:03:53.474 { 00:03:53.474 "trtype": "tcp", 00:03:53.474 "method": "nvmf_get_transports", 00:03:53.474 "req_id": 1 00:03:53.474 } 00:03:53.474 Got JSON-RPC error response 00:03:53.474 response: 00:03:53.474 { 00:03:53.474 "code": -19, 00:03:53.474 "message": "No such device" 00:03:53.474 } 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.474 [2024-07-24 20:31:48.951130] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.474 20:31:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.733 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.733 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:53.733 { 00:03:53.733 "subsystems": [ 00:03:53.733 { 00:03:53.733 "subsystem": "vfio_user_target", 00:03:53.733 "config": null 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "keyring", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "iobuf", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "iobuf_set_options", 00:03:53.733 "params": { 00:03:53.733 "small_pool_count": 8192, 00:03:53.733 "large_pool_count": 1024, 00:03:53.733 "small_bufsize": 8192, 00:03:53.733 "large_bufsize": 135168 00:03:53.733 } 00:03:53.733 } 00:03:53.733 ] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "sock", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "sock_set_default_impl", 00:03:53.733 "params": { 00:03:53.733 "impl_name": "posix" 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "sock_impl_set_options", 00:03:53.733 "params": { 00:03:53.733 "impl_name": "ssl", 00:03:53.733 "recv_buf_size": 4096, 00:03:53.733 "send_buf_size": 4096, 00:03:53.733 "enable_recv_pipe": true, 00:03:53.733 "enable_quickack": false, 00:03:53.733 "enable_placement_id": 0, 00:03:53.733 "enable_zerocopy_send_server": true, 00:03:53.733 "enable_zerocopy_send_client": false, 00:03:53.733 "zerocopy_threshold": 0, 
00:03:53.733 "tls_version": 0, 00:03:53.733 "enable_ktls": false 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "sock_impl_set_options", 00:03:53.733 "params": { 00:03:53.733 "impl_name": "posix", 00:03:53.733 "recv_buf_size": 2097152, 00:03:53.733 "send_buf_size": 2097152, 00:03:53.733 "enable_recv_pipe": true, 00:03:53.733 "enable_quickack": false, 00:03:53.733 "enable_placement_id": 0, 00:03:53.733 "enable_zerocopy_send_server": true, 00:03:53.733 "enable_zerocopy_send_client": false, 00:03:53.733 "zerocopy_threshold": 0, 00:03:53.733 "tls_version": 0, 00:03:53.733 "enable_ktls": false 00:03:53.733 } 00:03:53.733 } 00:03:53.733 ] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "vmd", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "accel", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "accel_set_options", 00:03:53.733 "params": { 00:03:53.733 "small_cache_size": 128, 00:03:53.733 "large_cache_size": 16, 00:03:53.733 "task_count": 2048, 00:03:53.733 "sequence_count": 2048, 00:03:53.733 "buf_count": 2048 00:03:53.733 } 00:03:53.733 } 00:03:53.733 ] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "bdev", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "bdev_set_options", 00:03:53.733 "params": { 00:03:53.733 "bdev_io_pool_size": 65535, 00:03:53.733 "bdev_io_cache_size": 256, 00:03:53.733 "bdev_auto_examine": true, 00:03:53.733 "iobuf_small_cache_size": 128, 00:03:53.733 "iobuf_large_cache_size": 16 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "bdev_raid_set_options", 00:03:53.733 "params": { 00:03:53.733 "process_window_size_kb": 1024, 00:03:53.733 "process_max_bandwidth_mb_sec": 0 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "bdev_iscsi_set_options", 00:03:53.733 "params": { 00:03:53.733 "timeout_sec": 30 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "bdev_nvme_set_options", 00:03:53.733 
"params": { 00:03:53.733 "action_on_timeout": "none", 00:03:53.733 "timeout_us": 0, 00:03:53.733 "timeout_admin_us": 0, 00:03:53.733 "keep_alive_timeout_ms": 10000, 00:03:53.733 "arbitration_burst": 0, 00:03:53.733 "low_priority_weight": 0, 00:03:53.733 "medium_priority_weight": 0, 00:03:53.733 "high_priority_weight": 0, 00:03:53.733 "nvme_adminq_poll_period_us": 10000, 00:03:53.733 "nvme_ioq_poll_period_us": 0, 00:03:53.733 "io_queue_requests": 0, 00:03:53.733 "delay_cmd_submit": true, 00:03:53.733 "transport_retry_count": 4, 00:03:53.733 "bdev_retry_count": 3, 00:03:53.733 "transport_ack_timeout": 0, 00:03:53.733 "ctrlr_loss_timeout_sec": 0, 00:03:53.733 "reconnect_delay_sec": 0, 00:03:53.733 "fast_io_fail_timeout_sec": 0, 00:03:53.733 "disable_auto_failback": false, 00:03:53.733 "generate_uuids": false, 00:03:53.733 "transport_tos": 0, 00:03:53.733 "nvme_error_stat": false, 00:03:53.733 "rdma_srq_size": 0, 00:03:53.733 "io_path_stat": false, 00:03:53.733 "allow_accel_sequence": false, 00:03:53.733 "rdma_max_cq_size": 0, 00:03:53.733 "rdma_cm_event_timeout_ms": 0, 00:03:53.733 "dhchap_digests": [ 00:03:53.733 "sha256", 00:03:53.733 "sha384", 00:03:53.733 "sha512" 00:03:53.733 ], 00:03:53.733 "dhchap_dhgroups": [ 00:03:53.733 "null", 00:03:53.733 "ffdhe2048", 00:03:53.733 "ffdhe3072", 00:03:53.733 "ffdhe4096", 00:03:53.733 "ffdhe6144", 00:03:53.733 "ffdhe8192" 00:03:53.733 ] 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "bdev_nvme_set_hotplug", 00:03:53.733 "params": { 00:03:53.733 "period_us": 100000, 00:03:53.733 "enable": false 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "bdev_wait_for_examine" 00:03:53.733 } 00:03:53.733 ] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "scsi", 00:03:53.733 "config": null 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "scheduler", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "framework_set_scheduler", 00:03:53.733 "params": { 00:03:53.733 
"name": "static" 00:03:53.733 } 00:03:53.733 } 00:03:53.733 ] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "vhost_scsi", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "vhost_blk", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "ublk", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "nbd", 00:03:53.733 "config": [] 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "subsystem": "nvmf", 00:03:53.733 "config": [ 00:03:53.733 { 00:03:53.733 "method": "nvmf_set_config", 00:03:53.733 "params": { 00:03:53.733 "discovery_filter": "match_any", 00:03:53.733 "admin_cmd_passthru": { 00:03:53.733 "identify_ctrlr": false 00:03:53.733 } 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "nvmf_set_max_subsystems", 00:03:53.733 "params": { 00:03:53.733 "max_subsystems": 1024 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "nvmf_set_crdt", 00:03:53.733 "params": { 00:03:53.733 "crdt1": 0, 00:03:53.733 "crdt2": 0, 00:03:53.733 "crdt3": 0 00:03:53.733 } 00:03:53.733 }, 00:03:53.733 { 00:03:53.733 "method": "nvmf_create_transport", 00:03:53.733 "params": { 00:03:53.733 "trtype": "TCP", 00:03:53.733 "max_queue_depth": 128, 00:03:53.733 "max_io_qpairs_per_ctrlr": 127, 00:03:53.733 "in_capsule_data_size": 4096, 00:03:53.733 "max_io_size": 131072, 00:03:53.734 "io_unit_size": 131072, 00:03:53.734 "max_aq_depth": 128, 00:03:53.734 "num_shared_buffers": 511, 00:03:53.734 "buf_cache_size": 4294967295, 00:03:53.734 "dif_insert_or_strip": false, 00:03:53.734 "zcopy": false, 00:03:53.734 "c2h_success": true, 00:03:53.734 "sock_priority": 0, 00:03:53.734 "abort_timeout_sec": 1, 00:03:53.734 "ack_timeout": 0, 00:03:53.734 "data_wr_pool_size": 0 00:03:53.734 } 00:03:53.734 } 00:03:53.734 ] 00:03:53.734 }, 00:03:53.734 { 00:03:53.734 "subsystem": "iscsi", 00:03:53.734 "config": [ 00:03:53.734 { 00:03:53.734 "method": "iscsi_set_options", 00:03:53.734 
"params": { 00:03:53.734 "node_base": "iqn.2016-06.io.spdk", 00:03:53.734 "max_sessions": 128, 00:03:53.734 "max_connections_per_session": 2, 00:03:53.734 "max_queue_depth": 64, 00:03:53.734 "default_time2wait": 2, 00:03:53.734 "default_time2retain": 20, 00:03:53.734 "first_burst_length": 8192, 00:03:53.734 "immediate_data": true, 00:03:53.734 "allow_duplicated_isid": false, 00:03:53.734 "error_recovery_level": 0, 00:03:53.734 "nop_timeout": 60, 00:03:53.734 "nop_in_interval": 30, 00:03:53.734 "disable_chap": false, 00:03:53.734 "require_chap": false, 00:03:53.734 "mutual_chap": false, 00:03:53.734 "chap_group": 0, 00:03:53.734 "max_large_datain_per_connection": 64, 00:03:53.734 "max_r2t_per_connection": 4, 00:03:53.734 "pdu_pool_size": 36864, 00:03:53.734 "immediate_data_pool_size": 16384, 00:03:53.734 "data_out_pool_size": 2048 00:03:53.734 } 00:03:53.734 } 00:03:53.734 ] 00:03:53.734 } 00:03:53.734 ] 00:03:53.734 } 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1468356 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1468356 ']' 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1468356 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468356 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1468356' 00:03:53.734 killing process with pid 1468356 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1468356 00:03:53.734 20:31:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1468356 00:03:54.299 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1468498 00:03:54.299 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.299 20:31:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1468498 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1468498 ']' 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1468498 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468498 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468498' 00:03:59.557 killing process with pid 1468498 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1468498 00:03:59.557 20:31:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1468498 00:03:59.557 20:31:55 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:59.557 20:31:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:59.557 00:03:59.557 real 0m6.628s 00:03:59.557 user 0m6.206s 00:03:59.557 sys 0m0.708s 00:03:59.557 20:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.557 20:31:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.557 ************************************ 00:03:59.557 END TEST skip_rpc_with_json 00:03:59.557 ************************************ 00:03:59.557 20:31:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:59.557 20:31:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.557 20:31:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.557 20:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.815 ************************************ 00:03:59.815 START TEST skip_rpc_with_delay 00:03:59.815 ************************************ 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.815 
20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.815 [2024-07-24 20:31:55.186207] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:59.815 [2024-07-24 20:31:55.186343] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:59.815 00:03:59.815 real 0m0.070s 00:03:59.815 user 0m0.045s 00:03:59.815 sys 0m0.025s 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.815 20:31:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:59.815 ************************************ 00:03:59.815 END TEST skip_rpc_with_delay 00:03:59.815 ************************************ 00:03:59.815 20:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:59.815 20:31:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:59.815 20:31:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:59.815 20:31:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.815 20:31:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.815 20:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.815 ************************************ 00:03:59.815 START TEST exit_on_failed_rpc_init 00:03:59.815 ************************************ 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1469214 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1469214 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1469214 ']' 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:59.815 20:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.815 [2024-07-24 20:31:55.299363] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:03:59.815 [2024-07-24 20:31:55.299438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469214 ] 00:03:59.815 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.815 [2024-07-24 20:31:55.360074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.073 [2024-07-24 20:31:55.475940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.005 20:31:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.005 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.005 [2024-07-24 20:31:56.273191] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:01.005 [2024-07-24 20:31:56.273298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469260 ] 00:04:01.005 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.005 [2024-07-24 20:31:56.334002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.005 [2024-07-24 20:31:56.453831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.005 [2024-07-24 20:31:56.453942] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:01.005 [2024-07-24 20:31:56.453964] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:01.005 [2024-07-24 20:31:56.453977] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1469214 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1469214 ']' 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1469214 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469214 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469214' 
00:04:01.263 killing process with pid 1469214 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1469214 00:04:01.263 20:31:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1469214 00:04:01.548 00:04:01.549 real 0m1.822s 00:04:01.549 user 0m2.164s 00:04:01.549 sys 0m0.496s 00:04:01.549 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.549 20:31:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.549 ************************************ 00:04:01.549 END TEST exit_on_failed_rpc_init 00:04:01.549 ************************************ 00:04:01.549 20:31:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.807 00:04:01.807 real 0m14.244s 00:04:01.807 user 0m13.666s 00:04:01.807 sys 0m1.721s 00:04:01.807 20:31:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.807 20:31:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 ************************************ 00:04:01.807 END TEST skip_rpc 00:04:01.807 ************************************ 00:04:01.807 20:31:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.807 20:31:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.807 20:31:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.807 20:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 ************************************ 00:04:01.807 START TEST rpc_client 00:04:01.807 ************************************ 00:04:01.807 20:31:57 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.807 * Looking for test storage... 
00:04:01.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:01.807 20:31:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:01.807 OK 00:04:01.807 20:31:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:01.807 00:04:01.807 real 0m0.066s 00:04:01.807 user 0m0.025s 00:04:01.807 sys 0m0.045s 00:04:01.807 20:31:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.807 20:31:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 ************************************ 00:04:01.807 END TEST rpc_client 00:04:01.807 ************************************ 00:04:01.807 20:31:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:01.807 20:31:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.807 20:31:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.807 20:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 ************************************ 00:04:01.807 START TEST json_config 00:04:01.807 ************************************ 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.807 20:31:57 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:01.807 20:31:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.807 20:31:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.807 20:31:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.807 20:31:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:01.807 20:31:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.807 20:31:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.807 20:31:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:01.807 20:31:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@47 -- # : 0 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.807 20:31:57 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:01.807 20:31:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:01.807 20:31:57 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:01.807 INFO: JSON configuration test init 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 20:31:57 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:01.807 20:31:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:01.807 20:31:57 json_config -- json_config/common.sh@10 -- # shift 00:04:01.807 20:31:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.807 20:31:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.807 20:31:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.807 20:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.807 20:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.807 20:31:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1469480 00:04:01.807 20:31:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:01.807 20:31:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:01.807 Waiting for target to run... 00:04:01.807 20:31:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1469480 /var/tmp/spdk_tgt.sock 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 1469480 ']' 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:01.807 20:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.807 [2024-07-24 20:31:57.362668] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:04:01.807 [2024-07-24 20:31:57.362763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469480 ] 00:04:02.066 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.324 [2024-07-24 20:31:57.854303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.581 [2024-07-24 20:31:57.961639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:02.838 20:31:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:02.838 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.838 20:31:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:02.838 20:31:58 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:02.838 20:31:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.116 
20:32:01 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:06.116 20:32:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.116 20:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:06.116 20:32:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:06.116 20:32:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@51 -- # sort 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:06.373 20:32:01 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.373 20:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:06.373 20:32:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.373 20:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:06.373 20:32:01 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.373 20:32:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.630 MallocForNvmf0 00:04:06.630 20:32:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.630 20:32:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.889 MallocForNvmf1 00:04:06.889 20:32:02 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:06.889 20:32:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.146 [2024-07-24 20:32:02.514927] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.146 20:32:02 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.146 20:32:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.403 20:32:02 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.403 20:32:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.661 20:32:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.661 20:32:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.918 20:32:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.918 20:32:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.175 [2024-07-24 20:32:03.486124] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.175 20:32:03 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:08.175 20:32:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.175 20:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.175 20:32:03 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:08.175 20:32:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.175 20:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.175 20:32:03 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:08.175 20:32:03 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.175 20:32:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.432 MallocBdevForConfigChangeCheck 00:04:08.432 20:32:03 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:08.432 20:32:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.432 20:32:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.432 20:32:03 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:08.432 20:32:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.690 20:32:04 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:08.690 INFO: shutting down applications... 
00:04:08.690 20:32:04 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:08.690 20:32:04 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:08.690 20:32:04 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:08.690 20:32:04 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:10.585 Calling clear_iscsi_subsystem 00:04:10.585 Calling clear_nvmf_subsystem 00:04:10.585 Calling clear_nbd_subsystem 00:04:10.585 Calling clear_ublk_subsystem 00:04:10.585 Calling clear_vhost_blk_subsystem 00:04:10.585 Calling clear_vhost_scsi_subsystem 00:04:10.585 Calling clear_bdev_subsystem 00:04:10.585 20:32:05 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:10.585 20:32:05 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:10.586 20:32:05 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:10.586 20:32:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.586 20:32:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:10.586 20:32:05 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:10.843 20:32:06 json_config -- json_config/json_config.sh@349 -- # break 00:04:10.843 20:32:06 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:10.843 20:32:06 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:10.843 20:32:06 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:10.843 20:32:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:10.843 20:32:06 json_config -- json_config/common.sh@35 -- # [[ -n 1469480 ]] 00:04:10.843 20:32:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1469480 00:04:10.843 20:32:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:10.843 20:32:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.843 20:32:06 json_config -- json_config/common.sh@41 -- # kill -0 1469480 00:04:10.843 20:32:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.409 20:32:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.409 20:32:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.409 20:32:06 json_config -- json_config/common.sh@41 -- # kill -0 1469480 00:04:11.409 20:32:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.409 20:32:06 json_config -- json_config/common.sh@43 -- # break 00:04:11.409 20:32:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.409 20:32:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.409 SPDK target shutdown done 00:04:11.409 20:32:06 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:11.409 INFO: relaunching applications... 
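The shutdown sequence traced above (`kill -SIGINT`, then up to 30 rounds of `kill -0` with `sleep 0.5` between them) is the usual poll-until-gone pattern: `kill -0` sends no signal and only tests whether the PID still exists. A self-contained sketch, using a background `sleep` as a stand-in for `spdk_tgt` (the `wait` is needed here because, unlike in the harness, the process is a child of this shell and would otherwise linger as a zombie):

```shell
#!/bin/sh
# Sketch of json_config_test_shutdown_app: request shutdown with SIGINT,
# then poll with `kill -0` until the process disappears or we give up.
sleep 60 &                     # stand-in for the spdk_tgt process
app_pid=$!

kill -SIGINT "$app_pid"
wait "$app_pid" 2>/dev/null    # reap the child so kill -0 reports it gone

i=0
while [ "$i" -lt 30 ]; do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo "SPDK target shutdown done"
        break
    fi
    sleep 0.5
    i=$((i + 1))
done
```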
00:04:11.409 20:32:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.409 20:32:06 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.409 20:32:06 json_config -- json_config/common.sh@10 -- # shift 00:04:11.409 20:32:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.409 20:32:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.409 20:32:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.409 20:32:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.409 20:32:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.409 20:32:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1470794 00:04:11.409 20:32:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.409 20:32:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.409 Waiting for target to run... 00:04:11.409 20:32:06 json_config -- json_config/common.sh@25 -- # waitforlisten 1470794 /var/tmp/spdk_tgt.sock 00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@831 -- # '[' -z 1470794 ']' 00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.409 20:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.409 [2024-07-24 20:32:06.755805] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:11.409 [2024-07-24 20:32:06.755914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470794 ] 00:04:11.409 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.975 [2024-07-24 20:32:07.251829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.975 [2024-07-24 20:32:07.359481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.253 [2024-07-24 20:32:10.404113] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.253 [2024-07-24 20:32:10.436616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.253 20:32:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.253 20:32:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:15.253 20:32:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.253 00:04:15.253 20:32:10 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:15.253 20:32:10 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:15.253 INFO: Checking if target configuration is the same... 
00:04:15.253 20:32:10 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.253 20:32:10 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:15.253 20:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.253 + '[' 2 -ne 2 ']' 00:04:15.253 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.253 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:15.253 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.253 +++ basename /dev/fd/62 00:04:15.253 ++ mktemp /tmp/62.XXX 00:04:15.253 + tmp_file_1=/tmp/62.3rT 00:04:15.253 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.253 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.253 + tmp_file_2=/tmp/spdk_tgt_config.json.zzC 00:04:15.253 + ret=0 00:04:15.253 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.510 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.510 + diff -u /tmp/62.3rT /tmp/spdk_tgt_config.json.zzC 00:04:15.510 + echo 'INFO: JSON config files are the same' 00:04:15.510 INFO: JSON config files are the same 00:04:15.510 + rm /tmp/62.3rT /tmp/spdk_tgt_config.json.zzC 00:04:15.510 + exit 0 00:04:15.510 20:32:10 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:15.510 20:32:10 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:15.510 INFO: changing configuration and checking if this can be detected... 
00:04:15.510 20:32:10 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.510 20:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.767 20:32:11 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.767 20:32:11 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:15.767 20:32:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.767 + '[' 2 -ne 2 ']' 00:04:15.767 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:15.767 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.767 +++ basename /dev/fd/62 00:04:15.767 ++ mktemp /tmp/62.XXX 00:04:15.767 + tmp_file_1=/tmp/62.YGI 00:04:15.767 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.767 + tmp_file_2=/tmp/spdk_tgt_config.json.IPi 00:04:15.767 + ret=0 00:04:15.767 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.024 + diff -u /tmp/62.YGI /tmp/spdk_tgt_config.json.IPi 00:04:16.024 + ret=1 00:04:16.024 + echo '=== Start of file: /tmp/62.YGI ===' 00:04:16.024 + cat /tmp/62.YGI 00:04:16.282 + echo '=== End of file: /tmp/62.YGI ===' 00:04:16.282 + echo '' 00:04:16.282 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IPi ===' 00:04:16.282 + cat /tmp/spdk_tgt_config.json.IPi 00:04:16.282 + echo '=== End of file: /tmp/spdk_tgt_config.json.IPi ===' 00:04:16.282 + echo '' 00:04:16.282 + rm /tmp/62.YGI /tmp/spdk_tgt_config.json.IPi 00:04:16.282 + exit 1 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:16.282 INFO: configuration change detected. 
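The two `json_diff.sh` runs traced above show both outcomes of the config-change check: each side is normalized (the real script pipes `save_config` output through `config_filter.py -method sort`) and compared with `diff -u`; a non-empty diff sets `ret=1`, which the caller reports as a detected change. A minimal sketch of that flow, with two literal JSON fragments standing in for the saved configurations:

```shell
#!/bin/sh
# Sketch of the json_diff.sh comparison: write two normalized config
# snapshots to temp files, diff them, and flag any difference. `sort`
# stands in for the Python-based normalization used by the real script.
tmp_file_1=$(mktemp)
tmp_file_2=$(mktemp)

printf '%s\n' '{"a": 1}' | sort > "$tmp_file_1"   # snapshot before the change
printf '%s\n' '{"a": 2}' | sort > "$tmp_file_2"   # snapshot after the change

ret=0
diff -u "$tmp_file_1" "$tmp_file_2" || ret=1

if [ "$ret" -eq 1 ]; then
    echo "INFO: configuration change detected."
fi

rm "$tmp_file_1" "$tmp_file_2"
```

`diff` exits 0 only when the files are identical, so `ret` doubles as the test's pass/fail signal: the first run above expects 0 (configs identical), the second expects 1.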
00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@321 -- # [[ -n 1470794 ]] 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.282 20:32:11 json_config -- json_config/json_config.sh@327 -- # killprocess 1470794 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@950 -- # '[' -z 1470794 ']' 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@954 -- # kill -0 
1470794 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@955 -- # uname 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470794 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470794' 00:04:16.282 killing process with pid 1470794 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@969 -- # kill 1470794 00:04:16.282 20:32:11 json_config -- common/autotest_common.sh@974 -- # wait 1470794 00:04:18.179 20:32:13 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.179 20:32:13 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:18.179 20:32:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.179 20:32:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.179 20:32:13 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:18.179 20:32:13 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:18.179 INFO: Success 00:04:18.179 00:04:18.179 real 0m16.070s 00:04:18.179 user 0m17.831s 00:04:18.179 sys 0m2.132s 00:04:18.179 20:32:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.179 20:32:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.179 ************************************ 00:04:18.179 END TEST json_config 00:04:18.179 ************************************ 00:04:18.179 20:32:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.179 20:32:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.179 20:32:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.179 20:32:13 -- common/autotest_common.sh@10 -- # set +x 00:04:18.179 ************************************ 00:04:18.179 START TEST json_config_extra_key 00:04:18.179 ************************************ 00:04:18.179 20:32:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.179 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.179 20:32:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.180 20:32:13 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.180 20:32:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.180 20:32:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.180 20:32:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.180 20:32:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.180 20:32:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.180 20:32:13 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.180 20:32:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:18.180 20:32:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:18.180 20:32:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:18.180 20:32:13 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:18.180 INFO: launching applications... 
00:04:18.180 20:32:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1471708 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.180 Waiting for target to run... 
00:04:18.180 20:32:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1471708 /var/tmp/spdk_tgt.sock 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1471708 ']' 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.180 20:32:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:18.180 [2024-07-24 20:32:13.473692] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:04:18.180 [2024-07-24 20:32:13.473775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471708 ] 00:04:18.180 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.438 [2024-07-24 20:32:13.809951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.438 [2024-07-24 20:32:13.899459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.033 20:32:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.033 20:32:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:19.033 00:04:19.033 20:32:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:19.033 INFO: shutting down applications... 
00:04:19.033 20:32:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1471708 ]] 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1471708 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1471708 00:04:19.033 20:32:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1471708 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.603 20:32:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.603 SPDK target shutdown done 00:04:19.603 20:32:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:19.603 Success 00:04:19.603 00:04:19.603 real 0m1.549s 00:04:19.603 user 0m1.568s 00:04:19.603 sys 0m0.431s 00:04:19.603 20:32:14 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.603 20:32:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.603 
************************************ 00:04:19.603 END TEST json_config_extra_key 00:04:19.603 ************************************ 00:04:19.603 20:32:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:19.603 20:32:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.603 20:32:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.603 20:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:19.603 ************************************ 00:04:19.603 START TEST alias_rpc 00:04:19.603 ************************************ 00:04:19.603 20:32:14 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:19.603 * Looking for test storage... 00:04:19.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:19.603 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:19.603 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1471896 00:04:19.603 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.603 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1471896 00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471896 ']' 00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:19.603 20:32:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.603 [2024-07-24 20:32:15.076797] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:19.603 [2024-07-24 20:32:15.076890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471896 ] 00:04:19.603 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.603 [2024-07-24 20:32:15.141324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.861 [2024-07-24 20:32:15.258498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.118 20:32:15 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.118 20:32:15 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:20.118 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:20.376 20:32:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1471896 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1471896 ']' 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1471896 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471896 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471896' 
00:04:20.376 killing process with pid 1471896 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@969 -- # kill 1471896 00:04:20.376 20:32:15 alias_rpc -- common/autotest_common.sh@974 -- # wait 1471896 00:04:20.940 00:04:20.940 real 0m1.306s 00:04:20.940 user 0m1.393s 00:04:20.940 sys 0m0.448s 00:04:20.940 20:32:16 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.940 20:32:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.940 ************************************ 00:04:20.940 END TEST alias_rpc 00:04:20.940 ************************************ 00:04:20.940 20:32:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:20.940 20:32:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:20.941 20:32:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.941 20:32:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.941 20:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 ************************************ 00:04:20.941 START TEST spdkcli_tcp 00:04:20.941 ************************************ 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:20.941 * Looking for test storage... 
00:04:20.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1472090 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:20.941 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1472090 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1472090 ']' 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.941 20:32:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 [2024-07-24 20:32:16.424724] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:20.941 [2024-07-24 20:32:16.424805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472090 ] 00:04:20.941 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.941 [2024-07-24 20:32:16.485589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.198 [2024-07-24 20:32:16.599218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.198 [2024-07-24 20:32:16.599222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.455 20:32:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.455 20:32:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:21.455 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1472214 00:04:21.455 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:21.455 20:32:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:21.713 [ 00:04:21.713 "bdev_malloc_delete", 00:04:21.713 "bdev_malloc_create", 00:04:21.713 "bdev_null_resize", 00:04:21.713 "bdev_null_delete", 00:04:21.713 "bdev_null_create", 00:04:21.713 "bdev_nvme_cuse_unregister", 00:04:21.713 "bdev_nvme_cuse_register", 00:04:21.713 "bdev_opal_new_user", 00:04:21.713 "bdev_opal_set_lock_state", 00:04:21.713 "bdev_opal_delete", 00:04:21.713 "bdev_opal_get_info", 00:04:21.713 "bdev_opal_create", 00:04:21.713 "bdev_nvme_opal_revert", 00:04:21.713 
"bdev_nvme_opal_init", 00:04:21.713 "bdev_nvme_send_cmd", 00:04:21.713 "bdev_nvme_get_path_iostat", 00:04:21.713 "bdev_nvme_get_mdns_discovery_info", 00:04:21.713 "bdev_nvme_stop_mdns_discovery", 00:04:21.713 "bdev_nvme_start_mdns_discovery", 00:04:21.713 "bdev_nvme_set_multipath_policy", 00:04:21.713 "bdev_nvme_set_preferred_path", 00:04:21.713 "bdev_nvme_get_io_paths", 00:04:21.713 "bdev_nvme_remove_error_injection", 00:04:21.713 "bdev_nvme_add_error_injection", 00:04:21.713 "bdev_nvme_get_discovery_info", 00:04:21.713 "bdev_nvme_stop_discovery", 00:04:21.713 "bdev_nvme_start_discovery", 00:04:21.713 "bdev_nvme_get_controller_health_info", 00:04:21.713 "bdev_nvme_disable_controller", 00:04:21.713 "bdev_nvme_enable_controller", 00:04:21.713 "bdev_nvme_reset_controller", 00:04:21.713 "bdev_nvme_get_transport_statistics", 00:04:21.713 "bdev_nvme_apply_firmware", 00:04:21.713 "bdev_nvme_detach_controller", 00:04:21.713 "bdev_nvme_get_controllers", 00:04:21.713 "bdev_nvme_attach_controller", 00:04:21.713 "bdev_nvme_set_hotplug", 00:04:21.713 "bdev_nvme_set_options", 00:04:21.713 "bdev_passthru_delete", 00:04:21.713 "bdev_passthru_create", 00:04:21.713 "bdev_lvol_set_parent_bdev", 00:04:21.713 "bdev_lvol_set_parent", 00:04:21.713 "bdev_lvol_check_shallow_copy", 00:04:21.713 "bdev_lvol_start_shallow_copy", 00:04:21.713 "bdev_lvol_grow_lvstore", 00:04:21.713 "bdev_lvol_get_lvols", 00:04:21.713 "bdev_lvol_get_lvstores", 00:04:21.713 "bdev_lvol_delete", 00:04:21.713 "bdev_lvol_set_read_only", 00:04:21.713 "bdev_lvol_resize", 00:04:21.713 "bdev_lvol_decouple_parent", 00:04:21.713 "bdev_lvol_inflate", 00:04:21.713 "bdev_lvol_rename", 00:04:21.713 "bdev_lvol_clone_bdev", 00:04:21.713 "bdev_lvol_clone", 00:04:21.713 "bdev_lvol_snapshot", 00:04:21.713 "bdev_lvol_create", 00:04:21.713 "bdev_lvol_delete_lvstore", 00:04:21.713 "bdev_lvol_rename_lvstore", 00:04:21.713 "bdev_lvol_create_lvstore", 00:04:21.713 "bdev_raid_set_options", 00:04:21.713 "bdev_raid_remove_base_bdev", 
00:04:21.713 "bdev_raid_add_base_bdev", 00:04:21.713 "bdev_raid_delete", 00:04:21.713 "bdev_raid_create", 00:04:21.713 "bdev_raid_get_bdevs", 00:04:21.713 "bdev_error_inject_error", 00:04:21.713 "bdev_error_delete", 00:04:21.713 "bdev_error_create", 00:04:21.713 "bdev_split_delete", 00:04:21.713 "bdev_split_create", 00:04:21.713 "bdev_delay_delete", 00:04:21.713 "bdev_delay_create", 00:04:21.713 "bdev_delay_update_latency", 00:04:21.713 "bdev_zone_block_delete", 00:04:21.713 "bdev_zone_block_create", 00:04:21.713 "blobfs_create", 00:04:21.713 "blobfs_detect", 00:04:21.713 "blobfs_set_cache_size", 00:04:21.713 "bdev_aio_delete", 00:04:21.713 "bdev_aio_rescan", 00:04:21.713 "bdev_aio_create", 00:04:21.713 "bdev_ftl_set_property", 00:04:21.713 "bdev_ftl_get_properties", 00:04:21.713 "bdev_ftl_get_stats", 00:04:21.713 "bdev_ftl_unmap", 00:04:21.713 "bdev_ftl_unload", 00:04:21.713 "bdev_ftl_delete", 00:04:21.713 "bdev_ftl_load", 00:04:21.713 "bdev_ftl_create", 00:04:21.713 "bdev_virtio_attach_controller", 00:04:21.713 "bdev_virtio_scsi_get_devices", 00:04:21.713 "bdev_virtio_detach_controller", 00:04:21.713 "bdev_virtio_blk_set_hotplug", 00:04:21.713 "bdev_iscsi_delete", 00:04:21.713 "bdev_iscsi_create", 00:04:21.713 "bdev_iscsi_set_options", 00:04:21.713 "accel_error_inject_error", 00:04:21.713 "ioat_scan_accel_module", 00:04:21.713 "dsa_scan_accel_module", 00:04:21.713 "iaa_scan_accel_module", 00:04:21.713 "vfu_virtio_create_scsi_endpoint", 00:04:21.713 "vfu_virtio_scsi_remove_target", 00:04:21.713 "vfu_virtio_scsi_add_target", 00:04:21.713 "vfu_virtio_create_blk_endpoint", 00:04:21.713 "vfu_virtio_delete_endpoint", 00:04:21.713 "keyring_file_remove_key", 00:04:21.713 "keyring_file_add_key", 00:04:21.713 "keyring_linux_set_options", 00:04:21.713 "iscsi_get_histogram", 00:04:21.713 "iscsi_enable_histogram", 00:04:21.713 "iscsi_set_options", 00:04:21.713 "iscsi_get_auth_groups", 00:04:21.713 "iscsi_auth_group_remove_secret", 00:04:21.713 "iscsi_auth_group_add_secret", 
00:04:21.713 "iscsi_delete_auth_group", 00:04:21.713 "iscsi_create_auth_group", 00:04:21.713 "iscsi_set_discovery_auth", 00:04:21.713 "iscsi_get_options", 00:04:21.713 "iscsi_target_node_request_logout", 00:04:21.713 "iscsi_target_node_set_redirect", 00:04:21.713 "iscsi_target_node_set_auth", 00:04:21.713 "iscsi_target_node_add_lun", 00:04:21.713 "iscsi_get_stats", 00:04:21.713 "iscsi_get_connections", 00:04:21.713 "iscsi_portal_group_set_auth", 00:04:21.713 "iscsi_start_portal_group", 00:04:21.713 "iscsi_delete_portal_group", 00:04:21.713 "iscsi_create_portal_group", 00:04:21.713 "iscsi_get_portal_groups", 00:04:21.713 "iscsi_delete_target_node", 00:04:21.713 "iscsi_target_node_remove_pg_ig_maps", 00:04:21.713 "iscsi_target_node_add_pg_ig_maps", 00:04:21.713 "iscsi_create_target_node", 00:04:21.713 "iscsi_get_target_nodes", 00:04:21.713 "iscsi_delete_initiator_group", 00:04:21.713 "iscsi_initiator_group_remove_initiators", 00:04:21.713 "iscsi_initiator_group_add_initiators", 00:04:21.714 "iscsi_create_initiator_group", 00:04:21.714 "iscsi_get_initiator_groups", 00:04:21.714 "nvmf_set_crdt", 00:04:21.714 "nvmf_set_config", 00:04:21.714 "nvmf_set_max_subsystems", 00:04:21.714 "nvmf_stop_mdns_prr", 00:04:21.714 "nvmf_publish_mdns_prr", 00:04:21.714 "nvmf_subsystem_get_listeners", 00:04:21.714 "nvmf_subsystem_get_qpairs", 00:04:21.714 "nvmf_subsystem_get_controllers", 00:04:21.714 "nvmf_get_stats", 00:04:21.714 "nvmf_get_transports", 00:04:21.714 "nvmf_create_transport", 00:04:21.714 "nvmf_get_targets", 00:04:21.714 "nvmf_delete_target", 00:04:21.714 "nvmf_create_target", 00:04:21.714 "nvmf_subsystem_allow_any_host", 00:04:21.714 "nvmf_subsystem_remove_host", 00:04:21.714 "nvmf_subsystem_add_host", 00:04:21.714 "nvmf_ns_remove_host", 00:04:21.714 "nvmf_ns_add_host", 00:04:21.714 "nvmf_subsystem_remove_ns", 00:04:21.714 "nvmf_subsystem_add_ns", 00:04:21.714 "nvmf_subsystem_listener_set_ana_state", 00:04:21.714 "nvmf_discovery_get_referrals", 00:04:21.714 
"nvmf_discovery_remove_referral", 00:04:21.714 "nvmf_discovery_add_referral", 00:04:21.714 "nvmf_subsystem_remove_listener", 00:04:21.714 "nvmf_subsystem_add_listener", 00:04:21.714 "nvmf_delete_subsystem", 00:04:21.714 "nvmf_create_subsystem", 00:04:21.714 "nvmf_get_subsystems", 00:04:21.714 "env_dpdk_get_mem_stats", 00:04:21.714 "nbd_get_disks", 00:04:21.714 "nbd_stop_disk", 00:04:21.714 "nbd_start_disk", 00:04:21.714 "ublk_recover_disk", 00:04:21.714 "ublk_get_disks", 00:04:21.714 "ublk_stop_disk", 00:04:21.714 "ublk_start_disk", 00:04:21.714 "ublk_destroy_target", 00:04:21.714 "ublk_create_target", 00:04:21.714 "virtio_blk_create_transport", 00:04:21.714 "virtio_blk_get_transports", 00:04:21.714 "vhost_controller_set_coalescing", 00:04:21.714 "vhost_get_controllers", 00:04:21.714 "vhost_delete_controller", 00:04:21.714 "vhost_create_blk_controller", 00:04:21.714 "vhost_scsi_controller_remove_target", 00:04:21.714 "vhost_scsi_controller_add_target", 00:04:21.714 "vhost_start_scsi_controller", 00:04:21.714 "vhost_create_scsi_controller", 00:04:21.714 "thread_set_cpumask", 00:04:21.714 "framework_get_governor", 00:04:21.714 "framework_get_scheduler", 00:04:21.714 "framework_set_scheduler", 00:04:21.714 "framework_get_reactors", 00:04:21.714 "thread_get_io_channels", 00:04:21.714 "thread_get_pollers", 00:04:21.714 "thread_get_stats", 00:04:21.714 "framework_monitor_context_switch", 00:04:21.714 "spdk_kill_instance", 00:04:21.714 "log_enable_timestamps", 00:04:21.714 "log_get_flags", 00:04:21.714 "log_clear_flag", 00:04:21.714 "log_set_flag", 00:04:21.714 "log_get_level", 00:04:21.714 "log_set_level", 00:04:21.714 "log_get_print_level", 00:04:21.714 "log_set_print_level", 00:04:21.714 "framework_enable_cpumask_locks", 00:04:21.714 "framework_disable_cpumask_locks", 00:04:21.714 "framework_wait_init", 00:04:21.714 "framework_start_init", 00:04:21.714 "scsi_get_devices", 00:04:21.714 "bdev_get_histogram", 00:04:21.714 "bdev_enable_histogram", 00:04:21.714 
"bdev_set_qos_limit", 00:04:21.714 "bdev_set_qd_sampling_period", 00:04:21.714 "bdev_get_bdevs", 00:04:21.714 "bdev_reset_iostat", 00:04:21.714 "bdev_get_iostat", 00:04:21.714 "bdev_examine", 00:04:21.714 "bdev_wait_for_examine", 00:04:21.714 "bdev_set_options", 00:04:21.714 "notify_get_notifications", 00:04:21.714 "notify_get_types", 00:04:21.714 "accel_get_stats", 00:04:21.714 "accel_set_options", 00:04:21.714 "accel_set_driver", 00:04:21.714 "accel_crypto_key_destroy", 00:04:21.714 "accel_crypto_keys_get", 00:04:21.714 "accel_crypto_key_create", 00:04:21.714 "accel_assign_opc", 00:04:21.714 "accel_get_module_info", 00:04:21.714 "accel_get_opc_assignments", 00:04:21.714 "vmd_rescan", 00:04:21.714 "vmd_remove_device", 00:04:21.714 "vmd_enable", 00:04:21.714 "sock_get_default_impl", 00:04:21.714 "sock_set_default_impl", 00:04:21.714 "sock_impl_set_options", 00:04:21.714 "sock_impl_get_options", 00:04:21.714 "iobuf_get_stats", 00:04:21.714 "iobuf_set_options", 00:04:21.714 "keyring_get_keys", 00:04:21.714 "framework_get_pci_devices", 00:04:21.714 "framework_get_config", 00:04:21.714 "framework_get_subsystems", 00:04:21.714 "vfu_tgt_set_base_path", 00:04:21.714 "trace_get_info", 00:04:21.714 "trace_get_tpoint_group_mask", 00:04:21.714 "trace_disable_tpoint_group", 00:04:21.714 "trace_enable_tpoint_group", 00:04:21.714 "trace_clear_tpoint_mask", 00:04:21.714 "trace_set_tpoint_mask", 00:04:21.714 "spdk_get_version", 00:04:21.714 "rpc_get_methods" 00:04:21.714 ] 00:04:21.714 20:32:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:21.714 20:32:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:21.714 20:32:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1472090 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1472090 ']' 
00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1472090 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472090 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472090' 00:04:21.714 killing process with pid 1472090 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1472090 00:04:21.714 20:32:17 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1472090 00:04:22.279 00:04:22.279 real 0m1.298s 00:04:22.279 user 0m2.243s 00:04:22.279 sys 0m0.465s 00:04:22.279 20:32:17 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.279 20:32:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 ************************************ 00:04:22.279 END TEST spdkcli_tcp 00:04:22.279 ************************************ 00:04:22.279 20:32:17 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:22.279 20:32:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.279 20:32:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.279 20:32:17 -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 ************************************ 00:04:22.279 START TEST dpdk_mem_utility 00:04:22.279 ************************************ 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:22.279 
* Looking for test storage... 00:04:22.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:22.279 20:32:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:22.279 20:32:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1472410 00:04:22.279 20:32:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.279 20:32:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1472410 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1472410 ']' 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.279 20:32:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 [2024-07-24 20:32:17.763850] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:04:22.279 [2024-07-24 20:32:17.763949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472410 ] 00:04:22.279 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.279 [2024-07-24 20:32:17.821848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.536 [2024-07-24 20:32:17.928390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.794 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.794 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:22.794 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:22.794 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:22.794 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.794 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.794 { 00:04:22.794 "filename": "/tmp/spdk_mem_dump.txt" 00:04:22.794 } 00:04:22.794 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.794 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:22.794 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:22.794 1 heaps totaling size 814.000000 MiB 00:04:22.794 size: 814.000000 MiB heap id: 0 00:04:22.794 end heaps---------- 00:04:22.794 8 mempools totaling size 598.116089 MiB 00:04:22.794 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:22.794 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:22.794 size: 84.521057 MiB name: bdev_io_1472410 00:04:22.794 size: 51.011292 MiB name: evtpool_1472410 
00:04:22.794 size: 50.003479 MiB name: msgpool_1472410 00:04:22.794 size: 21.763794 MiB name: PDU_Pool 00:04:22.794 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:22.794 size: 0.026123 MiB name: Session_Pool 00:04:22.794 end mempools------- 00:04:22.794 6 memzones totaling size 4.142822 MiB 00:04:22.794 size: 1.000366 MiB name: RG_ring_0_1472410 00:04:22.794 size: 1.000366 MiB name: RG_ring_1_1472410 00:04:22.794 size: 1.000366 MiB name: RG_ring_4_1472410 00:04:22.794 size: 1.000366 MiB name: RG_ring_5_1472410 00:04:22.794 size: 0.125366 MiB name: RG_ring_2_1472410 00:04:22.794 size: 0.015991 MiB name: RG_ring_3_1472410 00:04:22.794 end memzones------- 00:04:22.794 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:22.794 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:22.794 list of free elements. size: 12.519348 MiB 00:04:22.794 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:22.794 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:22.794 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:22.794 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:22.794 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:22.794 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:22.794 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:22.794 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:22.794 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:22.794 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:22.794 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:22.794 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:22.794 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:22.794 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:04:22.794 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:22.794 list of standard malloc elements. size: 199.218079 MiB 00:04:22.794 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:22.794 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:22.794 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:22.794 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:22.794 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:22.794 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:22.794 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:22.794 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:22.794 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:22.794 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:22.795 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:22.795 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:22.795 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:22.795 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:22.795 list of memzone associated elements. 
size: 602.262573 MiB 00:04:22.795 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:22.795 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:22.795 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:22.795 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:22.795 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:22.795 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1472410_0 00:04:22.795 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:22.795 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1472410_0 00:04:22.795 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:22.795 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1472410_0 00:04:22.795 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:22.795 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:22.795 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:22.795 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:22.795 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:22.795 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1472410 00:04:22.795 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:22.795 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1472410 00:04:22.795 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:22.795 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1472410 00:04:22.795 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:22.795 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:22.795 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:22.795 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:22.795 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:22.795 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:22.795 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:22.795 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:22.795 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:22.795 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1472410 00:04:22.795 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:22.795 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1472410 00:04:22.795 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:22.795 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1472410 00:04:22.795 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:22.795 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1472410 00:04:22.795 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:22.795 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1472410 00:04:22.795 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:22.795 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:22.795 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:22.795 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:22.795 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:22.795 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:22.795 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:22.795 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1472410 00:04:22.795 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:22.795 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:22.795 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:22.795 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:22.795 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:22.795 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1472410 00:04:22.795 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:22.795 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:22.795 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:22.795 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1472410 00:04:22.795 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:22.795 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1472410 00:04:22.795 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:22.795 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:22.795 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:22.795 20:32:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1472410 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1472410 ']' 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1472410 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472410 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472410' 00:04:22.795 killing process with pid 1472410 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1472410 00:04:22.795 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1472410 00:04:23.359 00:04:23.359 real 0m1.149s 
00:04:23.359 user 0m1.116s
00:04:23.359 sys 0m0.401s
00:04:23.359 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:23.359 20:32:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:23.359 ************************************
00:04:23.359 END TEST dpdk_mem_utility
00:04:23.359 ************************************
00:04:23.359 20:32:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:23.359 20:32:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:23.359 20:32:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:23.359 20:32:18 -- common/autotest_common.sh@10 -- # set +x
00:04:23.359 ************************************
00:04:23.359 START TEST event
00:04:23.359 ************************************
00:04:23.359 20:32:18 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:23.359 * Looking for test storage...
00:04:23.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:23.359 20:32:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:23.359 20:32:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:23.359 20:32:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:23.359 20:32:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:23.359 20:32:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:23.359 20:32:18 event -- common/autotest_common.sh@10 -- # set +x
00:04:23.618 ************************************
00:04:23.618 START TEST event_perf
00:04:23.618 ************************************
00:04:23.618 20:32:18 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:23.618 Running I/O for 1 seconds...[2024-07-24 20:32:18.946093] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:04:23.618 [2024-07-24 20:32:18.946161] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472598 ]
00:04:23.618 EAL: No free 2048 kB hugepages reported on node 1
00:04:23.618 [2024-07-24 20:32:19.009116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:23.618 [2024-07-24 20:32:19.128113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:23.618 [2024-07-24 20:32:19.128166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:23.618 [2024-07-24 20:32:19.128288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:23.618 [2024-07-24 20:32:19.128292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.990 Running I/O for 1 seconds...
00:04:24.990 lcore 0: 237883
00:04:24.990 lcore 1: 237882
00:04:24.990 lcore 2: 237882
00:04:24.990 lcore 3: 237883
00:04:24.990 done.
00:04:24.990
00:04:24.990 real 0m1.322s
00:04:24.990 user 0m4.229s
00:04:24.990 sys 0m0.087s
00:04:24.990 20:32:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:24.990 20:32:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:24.990 ************************************
00:04:24.990 END TEST event_perf
00:04:24.990 ************************************
00:04:24.990 20:32:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:24.990 20:32:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:24.990 20:32:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.990 20:32:20 event -- common/autotest_common.sh@10 -- # set +x
00:04:24.990 ************************************
00:04:24.990 START TEST event_reactor
00:04:24.990 ************************************
00:04:24.990 20:32:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:24.990 [2024-07-24 20:32:20.313903] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:04:24.990 [2024-07-24 20:32:20.313973] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472755 ]
00:04:24.990 EAL: No free 2048 kB hugepages reported on node 1
00:04:24.990 [2024-07-24 20:32:20.375156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:24.990 [2024-07-24 20:32:20.493971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:26.361 test_start
00:04:26.361 oneshot
00:04:26.361 tick 100
00:04:26.361 tick 100
00:04:26.361 tick 250
00:04:26.361 tick 100
00:04:26.361 tick 100
00:04:26.361 tick 100
00:04:26.361 tick 250
00:04:26.361 tick 500
00:04:26.361 tick 100
00:04:26.361 tick 100
00:04:26.361 tick 250
00:04:26.361 tick 100
00:04:26.361 tick 100
00:04:26.361 test_end
00:04:26.362
00:04:26.362 real 0m1.316s
00:04:26.362 user 0m1.228s
00:04:26.362 sys 0m0.083s
00:04:26.362 20:32:21 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:26.362 20:32:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:26.362 ************************************
00:04:26.362 END TEST event_reactor
00:04:26.362 ************************************
00:04:26.362 20:32:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:26.362 20:32:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:26.362 20:32:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:26.362 20:32:21 event -- common/autotest_common.sh@10 -- # set +x
00:04:26.362 ************************************
00:04:26.362 START TEST event_reactor_perf
00:04:26.362 ************************************
00:04:26.362 20:32:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:26.362 [2024-07-24 20:32:21.668989] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:04:26.362 [2024-07-24 20:32:21.669041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472915 ]
00:04:26.362 EAL: No free 2048 kB hugepages reported on node 1
00:04:26.362 [2024-07-24 20:32:21.729763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:26.362 [2024-07-24 20:32:21.847653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.734 test_start
00:04:27.734 test_end
00:04:27.734 Performance: 362452 events per second
00:04:27.734
00:04:27.734 real 0m1.311s
00:04:27.734 user 0m1.224s
00:04:27.734 sys 0m0.083s
00:04:27.734 20:32:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:27.734 20:32:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:27.734 ************************************
00:04:27.734 END TEST event_reactor_perf
00:04:27.734 ************************************
00:04:27.734 20:32:22 event -- event/event.sh@49 -- # uname -s
00:04:27.734 20:32:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:27.734 20:32:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:27.734 20:32:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:27.734 20:32:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:27.734 20:32:22 event -- common/autotest_common.sh@10 -- # set +x
00:04:27.734 ************************************
00:04:27.734 START TEST event_scheduler
00:04:27.734 ************************************
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:27.734 * Looking for test storage...
00:04:27.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:27.734 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:27.734 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1473098
00:04:27.734 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:27.734 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:27.734 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1473098
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1473098 ']'
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:27.734 20:32:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:27.734 [2024-07-24 20:32:23.113813] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:04:27.734 [2024-07-24 20:32:23.113897] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473098 ]
00:04:27.734 EAL: No free 2048 kB hugepages reported on node 1
00:04:27.734 [2024-07-24 20:32:23.171582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:27.734 [2024-07-24 20:32:23.295223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.734 [2024-07-24 20:32:23.295292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:27.734 [2024-07-24 20:32:23.295323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:27.734 [2024-07-24 20:32:23.295326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:04:28.007 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:28.007 [2024-07-24 20:32:23.344096] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:28.007 [2024-07-24 20:32:23.344123] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:04:28.007 [2024-07-24 20:32:23.344140] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:28.007 [2024-07-24 20:32:23.344151] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:28.007 [2024-07-24 20:32:23.344161] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.007 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:28.007 [2024-07-24 20:32:23.442159] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.007 20:32:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:28.007 20:32:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:28.007 ************************************
00:04:28.007 START TEST scheduler_create_thread
00:04:28.007 ************************************
00:04:28.007 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:28.007 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:28.007 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.007 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.007 2
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 3
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 4
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 5
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 6
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 7
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 8
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 9
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 10
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.008 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.267 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.268 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:28.268 20:32:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:28.268 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:28.268 20:32:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.525 20:32:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:28.525
00:04:28.525 real 0m0.591s
00:04:28.525 user 0m0.011s
00:04:28.525 sys 0m0.002s
00:04:28.525 20:32:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:28.525 20:32:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:28.525 ************************************
00:04:28.525 END TEST scheduler_create_thread
00:04:28.525 ************************************
00:04:28.525 20:32:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:28.525 20:32:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1473098
00:04:28.525 20:32:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1473098 ']'
00:04:28.525 20:32:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1473098
00:04:28.525 20:32:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:04:28.525 20:32:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:28.525 20:32:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473098
00:04:28.783 20:32:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:28.783 20:32:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:28.783 20:32:24 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473098'
killing process with pid 1473098
00:04:28.783 20:32:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1473098
00:04:28.783 20:32:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1473098
00:04:29.041 [2024-07-24 20:32:24.542299] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:29.299
00:04:29.299 real 0m1.787s
00:04:29.299 user 0m2.250s
00:04:29.299 sys 0m0.357s
00:04:29.299 20:32:24 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:29.299 20:32:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:29.299 ************************************
00:04:29.299 END TEST event_scheduler
00:04:29.299 ************************************
00:04:29.299 20:32:24 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:29.299 20:32:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:29.299 20:32:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.299 20:32:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.299 20:32:24 event -- common/autotest_common.sh@10 -- # set +x
00:04:29.299 ************************************
00:04:29.299 START TEST app_repeat
00:04:29.299 ************************************
00:04:29.299 20:32:24 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1473407
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1473407'
Process app_repeat pid: 1473407
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:04:29.299 20:32:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1473407 /var/tmp/spdk-nbd.sock
00:04:29.299 20:32:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1473407 ']'
00:04:29.299 20:32:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:29.299 20:32:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:29.299 20:32:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:29.557 20:32:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:29.557 20:32:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:29.557 [2024-07-24 20:32:24.884119] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:04:29.557 [2024-07-24 20:32:24.884187] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473407 ]
00:04:29.557 EAL: No free 2048 kB hugepages reported on node 1
00:04:29.557 [2024-07-24 20:32:24.947209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:29.557 [2024-07-24 20:32:25.062526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:29.557 [2024-07-24 20:32:25.062532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.815 20:32:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:29.815 20:32:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:04:29.815 20:32:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:30.073 Malloc0
00:04:30.073 20:32:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:30.331 Malloc1
00:04:30.331 20:32:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:30.331 20:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:30.589 /dev/nbd0
00:04:30.589 20:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:30.589 20:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:30.589 1+0 records in
00:04:30.589 1+0 records out
00:04:30.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175748 s, 23.3 MB/s
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:30.589 20:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:30.589 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:30.589 20:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:30.589 20:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:30.847 /dev/nbd1
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:30.847 1+0 records in
00:04:30.847 1+0 records out
00:04:30.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178414 s, 23.0 MB/s
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:30.847 20:32:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.847 20:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:31.106 {
00:04:31.106 "nbd_device": "/dev/nbd0",
00:04:31.106 "bdev_name": "Malloc0"
00:04:31.106 },
00:04:31.106 {
00:04:31.106 "nbd_device": "/dev/nbd1",
00:04:31.106 "bdev_name": "Malloc1"
00:04:31.106 }
00:04:31.106 ]'
00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:31.106 {
00:04:31.106 "nbd_device": "/dev/nbd0", 00:04:31.106 "bdev_name": "Malloc0" 00:04:31.106 }, 00:04:31.106 { 00:04:31.106 "nbd_device": "/dev/nbd1", 00:04:31.106 "bdev_name": "Malloc1" 00:04:31.106 } 00:04:31.106 ]' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.106 /dev/nbd1' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.106 /dev/nbd1' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.106 256+0 records in 00:04:31.106 256+0 records out 00:04:31.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404356 s, 259 MB/s 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.106 256+0 records in 00:04:31.106 256+0 records out 00:04:31.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245366 s, 42.7 MB/s 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.106 256+0 records in 00:04:31.106 256+0 records out 00:04:31.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261227 s, 40.1 MB/s 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.106 20:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.437 20:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.695 20:32:27 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.695 20:32:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.954 20:32:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.954 20:32:27 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:04:31.954 20:32:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.211 20:32:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.469 [2024-07-24 20:32:27.989015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.727 [2024-07-24 20:32:28.105117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.727 [2024-07-24 20:32:28.105117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.727 [2024-07-24 20:32:28.166750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.727 [2024-07-24 20:32:28.166832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.310 20:32:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:35.310 20:32:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:35.310 spdk_app_start Round 1 00:04:35.310 20:32:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1473407 /var/tmp/spdk-nbd.sock 00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1473407 ']' 00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
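The `nbd_dd_data_verify` calls traced above follow a simple write-then-compare pattern: fill a temp file with random data, `dd` it onto every NBD device, then `cmp` each device back against the pattern. A minimal sketch of that pattern follows; plain temp files stand in for `/dev/nbd0` and `/dev/nbd1` (the real helper writes to the kernel devices with `oflag=direct`), so the sketch runs without SPDK or an NBD module loaded.

```shell
#!/bin/bash
# Sketch of the nbd_dd_data_verify write/verify pattern from the log.
# Assumption: temp files substitute for the real /dev/nbd0 and /dev/nbd1.
set -e
tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
nbd0=$(mktemp)       # stand-in for /dev/nbd0
nbd1=$(mktemp)       # stand-in for /dev/nbd1

# write phase: 256 x 4 KiB blocks of random data, copied to every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null  # real test adds oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each "device" with the pattern,
# exactly as the log's `cmp -b -n 1M` invocations do
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
result=ok
echo "verify $result"
```

With `set -e`, any mismatching byte makes `cmp` abort the script, which is how the real helper fails the test on data corruption.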
00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.310 20:32:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.567 20:32:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.567 20:32:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:35.567 20:32:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.825 Malloc0 00:04:35.825 20:32:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.083 Malloc1 00:04:36.083 20:32:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.083 20:32:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.341 /dev/nbd0 00:04:36.341 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.341 20:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.341 1+0 records in 00:04:36.341 1+0 records out 00:04:36.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256288 s, 16.0 MB/s 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.341 20:32:31 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.341 20:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.341 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.341 20:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.341 20:32:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.598 /dev/nbd1 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.598 1+0 records in 00:04:36.598 1+0 records out 00:04:36.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189369 s, 21.6 MB/s 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.598 20:32:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.598 20:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.856 { 00:04:36.856 "nbd_device": "/dev/nbd0", 00:04:36.856 "bdev_name": "Malloc0" 00:04:36.856 }, 00:04:36.856 { 00:04:36.856 "nbd_device": "/dev/nbd1", 00:04:36.856 "bdev_name": "Malloc1" 00:04:36.856 } 00:04:36.856 ]' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.856 { 00:04:36.856 "nbd_device": "/dev/nbd0", 00:04:36.856 "bdev_name": "Malloc0" 00:04:36.856 }, 00:04:36.856 { 00:04:36.856 "nbd_device": "/dev/nbd1", 00:04:36.856 "bdev_name": "Malloc1" 00:04:36.856 } 00:04:36.856 ]' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.856 /dev/nbd1' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.856 /dev/nbd1' 00:04:36.856 
20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.856 256+0 records in 00:04:36.856 256+0 records out 00:04:36.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491028 s, 214 MB/s 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.856 256+0 records in 00:04:36.856 256+0 records out 00:04:36.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223668 s, 46.9 MB/s 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.856 20:32:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.856 256+0 records in 00:04:36.856 256+0 records out 00:04:36.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260766 s, 40.2 MB/s 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.113 20:32:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.114 20:32:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.371 20:32:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.628 20:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.628 20:32:32 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:37.629 20:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.629 20:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.629 20:32:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.629 20:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.886 20:32:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.886 20:32:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.143 20:32:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.400 [2024-07-24 20:32:33.813148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.400 [2024-07-24 20:32:33.928396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.400 [2024-07-24 20:32:33.928401] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.658 [2024-07-24 20:32:33.991263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.658 [2024-07-24 20:32:33.991350] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.184 20:32:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.184 20:32:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:41.184 spdk_app_start Round 2 00:04:41.184 20:32:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1473407 /var/tmp/spdk-nbd.sock 00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1473407 ']' 00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
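The `waitfornbd` helper traced repeatedly above polls up to 20 times for the device to register, then reads one 4 KiB block to prove it responds. The sketch below reproduces that loop; a background write to a temp path stands in for the kernel adding `nbd0` to `/proc/partitions`, so it runs without an NBD device (the real helper greps `/proc/partitions` and reads with `iflag=direct`).

```shell
#!/bin/bash
# Sketch of the waitfornbd poll-then-read pattern from the log.
# Assumption: a temp file appearing asynchronously substitutes for the
# nbd0 entry showing up in /proc/partitions.
marker=$(mktemp -u)                                  # path that will "appear" like nbd0
( sleep 0.2; head -c 4096 /dev/zero > "$marker" ) &  # device comes up asynchronously

i=1
while [ "$i" -le 20 ]; do
    [ -s "$marker" ] && break   # real helper: grep -q -w nbd0 /proc/partitions
    sleep 0.1
    i=$((i + 1))
done
wait                            # let the background "device bring-up" finish

# read test: one 4096-byte block, like `dd if=/dev/nbd0 ... bs=4096 count=1 iflag=direct`
readback=$(mktemp)
dd if="$marker" of="$readback" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$readback")
echo "nbd ready, read $size bytes"
```

The nonzero-size check at the end mirrors the log's `'[' 4096 '!=' 0 ']'` step, which is what lets the helper `return 0`.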
00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.184 20:32:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.441 20:32:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.441 20:32:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:41.441 20:32:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.699 Malloc0 00:04:41.699 20:32:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.957 Malloc1 00:04:41.957 20:32:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.957 20:32:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.215 /dev/nbd0 00:04:42.215 20:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.215 20:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.215 1+0 records in 00:04:42.215 1+0 records out 00:04:42.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214307 s, 19.1 MB/s 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:42.215 20:32:37 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:42.215 20:32:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:42.215 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.215 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.215 20:32:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.473 /dev/nbd1 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.473 1+0 records in 00:04:42.473 1+0 records out 00:04:42.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018629 s, 22.0 MB/s 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:42.473 20:32:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.473 20:32:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:42.732 { 00:04:42.732 "nbd_device": "/dev/nbd0", 00:04:42.732 "bdev_name": "Malloc0" 00:04:42.732 }, 00:04:42.732 { 00:04:42.732 "nbd_device": "/dev/nbd1", 00:04:42.732 "bdev_name": "Malloc1" 00:04:42.732 } 00:04:42.732 ]' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.732 { 00:04:42.732 "nbd_device": "/dev/nbd0", 00:04:42.732 "bdev_name": "Malloc0" 00:04:42.732 }, 00:04:42.732 { 00:04:42.732 "nbd_device": "/dev/nbd1", 00:04:42.732 "bdev_name": "Malloc1" 00:04:42.732 } 00:04:42.732 ]' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.732 /dev/nbd1' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.732 /dev/nbd1' 00:04:42.732 
20:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.732 256+0 records in 00:04:42.732 256+0 records out 00:04:42.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506098 s, 207 MB/s 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.732 256+0 records in 00:04:42.732 256+0 records out 00:04:42.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241598 s, 43.4 MB/s 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.732 256+0 records in 00:04:42.732 256+0 records out 00:04:42.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257318 s, 40.8 MB/s 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.732 20:32:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.990 20:32:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.248 20:32:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.248 20:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.505 20:32:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.505 20:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.505 20:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.505 20:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.506 20:32:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.506 20:32:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.763 20:32:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.328 [2024-07-24 20:32:39.609379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.328 [2024-07-24 20:32:39.725743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.328 [2024-07-24 20:32:39.725743] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.329 [2024-07-24 20:32:39.788206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.329 [2024-07-24 20:32:39.788307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.855 20:32:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1473407 /var/tmp/spdk-nbd.sock 00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1473407 ']' 00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.855 20:32:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:47.113 20:32:42 event.app_repeat -- event/event.sh@39 -- # killprocess 1473407 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1473407 ']' 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1473407 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473407 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473407' 00:04:47.113 killing process with pid 1473407 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1473407 00:04:47.113 20:32:42 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1473407 00:04:47.371 spdk_app_start is called in Round 0. 00:04:47.371 Shutdown signal received, stop current app iteration 00:04:47.371 Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 reinitialization... 00:04:47.372 spdk_app_start is called in Round 1. 00:04:47.372 Shutdown signal received, stop current app iteration 00:04:47.372 Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 reinitialization... 00:04:47.372 spdk_app_start is called in Round 2. 
00:04:47.372 Shutdown signal received, stop current app iteration 00:04:47.372 Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 reinitialization... 00:04:47.372 spdk_app_start is called in Round 3. 00:04:47.372 Shutdown signal received, stop current app iteration 00:04:47.372 20:32:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:47.372 20:32:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:47.372 00:04:47.372 real 0m18.007s 00:04:47.372 user 0m38.906s 00:04:47.372 sys 0m3.195s 00:04:47.372 20:32:42 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.372 20:32:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.372 ************************************ 00:04:47.372 END TEST app_repeat 00:04:47.372 ************************************ 00:04:47.372 20:32:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:47.372 20:32:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.372 20:32:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.372 20:32:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.372 20:32:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.372 ************************************ 00:04:47.372 START TEST cpu_locks 00:04:47.372 ************************************ 00:04:47.372 20:32:42 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.630 * Looking for test storage... 
00:04:47.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.630 20:32:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:47.630 20:32:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:47.630 20:32:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:47.630 20:32:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:47.630 20:32:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.630 20:32:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.630 20:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.630 ************************************ 00:04:47.630 START TEST default_locks 00:04:47.630 ************************************ 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1475761 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1475761 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1475761 ']' 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:47.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.630 20:32:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.630 [2024-07-24 20:32:43.039758] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:47.630 [2024-07-24 20:32:43.039835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475761 ] 00:04:47.630 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.630 [2024-07-24 20:32:43.096119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.887 [2024-07-24 20:32:43.204801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.144 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.144 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:48.144 20:32:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1475761 00:04:48.144 20:32:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1475761 00:04:48.144 20:32:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.401 lslocks: write error 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1475761 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1475761 ']' 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1475761 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:48.401 20:32:43 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1475761 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1475761' 00:04:48.401 killing process with pid 1475761 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1475761 00:04:48.401 20:32:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1475761 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1475761 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1475761 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1475761 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1475761 ']' 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1475761) - No such process 00:04:48.659 ERROR: process (pid: 1475761) is no longer running 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.659 00:04:48.659 real 0m1.227s 00:04:48.659 user 0m1.169s 00:04:48.659 sys 0m0.502s 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.659 20:32:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:48.659 ************************************ 00:04:48.659 END TEST default_locks 00:04:48.659 ************************************ 00:04:48.918 20:32:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:48.918 20:32:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.918 20:32:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.918 20:32:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.918 ************************************ 00:04:48.918 START TEST default_locks_via_rpc 00:04:48.918 ************************************ 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1475925 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1475925 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1475925 ']' 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.918 20:32:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.918 [2024-07-24 20:32:44.319099] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:48.918 [2024-07-24 20:32:44.319202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475925 ] 00:04:48.918 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.918 [2024-07-24 20:32:44.381048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.176 [2024-07-24 20:32:44.498575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1475925 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1475925 00:04:49.742 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1475925 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1475925 ']' 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1475925 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1475925 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1475925' 00:04:50.307 killing process with pid 1475925 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 1475925 00:04:50.307 20:32:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1475925 00:04:50.566 00:04:50.566 real 0m1.802s 00:04:50.566 user 0m1.931s 00:04:50.566 sys 0m0.563s 00:04:50.566 20:32:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.566 20:32:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.566 ************************************ 00:04:50.566 END TEST default_locks_via_rpc 00:04:50.566 ************************************ 00:04:50.566 20:32:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:50.566 20:32:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.566 20:32:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.566 20:32:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.566 ************************************ 00:04:50.566 START TEST non_locking_app_on_locked_coremask 00:04:50.566 ************************************ 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1476218 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1476218 /var/tmp/spdk.sock 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1476218 ']' 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.566 20:32:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.824 [2024-07-24 20:32:46.173181] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:50.824 [2024-07-24 20:32:46.173290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476218 ] 00:04:50.824 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.824 [2024-07-24 20:32:46.235456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.824 [2024-07-24 20:32:46.349302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.792 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1476358 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1476358 /var/tmp/spdk2.sock 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1476358 ']' 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.793 20:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.793 [2024-07-24 20:32:47.155815] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:51.793 [2024-07-24 20:32:47.155913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476358 ] 00:04:51.793 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.793 [2024-07-24 20:32:47.252737] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:51.793 [2024-07-24 20:32:47.252777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.050 [2024-07-24 20:32:47.487257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.616 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.616 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:52.616 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1476218 00:04:52.616 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1476218 00:04:52.616 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.181 lslocks: write error 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1476218 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1476218 ']' 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1476218 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476218 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1476218' 00:04:53.182 killing process with pid 1476218 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1476218 00:04:53.182 20:32:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1476218 00:04:54.112 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1476358 00:04:54.112 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1476358 ']' 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1476358 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476358 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476358' 00:04:54.113 killing process with pid 1476358 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1476358 00:04:54.113 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1476358 00:04:54.678 00:04:54.678 real 0m3.836s 00:04:54.678 user 0m4.153s 00:04:54.678 sys 0m1.105s 00:04:54.678 20:32:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.678 20:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 ************************************ 00:04:54.678 END TEST non_locking_app_on_locked_coremask 00:04:54.678 ************************************ 00:04:54.678 20:32:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:54.678 20:32:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.678 20:32:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.678 20:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 ************************************ 00:04:54.678 START TEST locking_app_on_unlocked_coremask 00:04:54.678 ************************************ 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1476664 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1476664 /var/tmp/spdk.sock 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1476664 ']' 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.678 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.679 20:32:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.679 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.679 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.679 [2024-07-24 20:32:50.057813] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:54.679 [2024-07-24 20:32:50.057930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476664 ] 00:04:54.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.679 [2024-07-24 20:32:50.123156] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:54.679 [2024-07-24 20:32:50.123204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.679 [2024-07-24 20:32:50.240828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1476800 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1476800 /var/tmp/spdk2.sock 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1476800 ']' 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.612 20:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 [2024-07-24 20:32:51.050144] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:04:55.612 [2024-07-24 20:32:51.050260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476800 ] 00:04:55.612 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.612 [2024-07-24 20:32:51.147020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.870 [2024-07-24 20:32:51.386267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.436 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.436 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:56.436 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1476800 00:04:56.436 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1476800 00:04:56.436 20:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.002 lslocks: write error 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1476664 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1476664 ']' 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1476664 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476664 00:04:57.002 20:32:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476664' 00:04:57.002 killing process with pid 1476664 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1476664 00:04:57.002 20:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1476664 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1476800 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1476800 ']' 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1476800 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476800 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476800' 00:04:57.936 killing process with pid 1476800 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 1476800 00:04:57.936 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1476800 00:04:58.500 00:04:58.500 real 0m3.911s 00:04:58.500 user 0m4.260s 00:04:58.500 sys 0m1.046s 00:04:58.500 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.500 20:32:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.500 ************************************ 00:04:58.500 END TEST locking_app_on_unlocked_coremask 00:04:58.501 ************************************ 00:04:58.501 20:32:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:58.501 20:32:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.501 20:32:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.501 20:32:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 ************************************ 00:04:58.501 START TEST locking_app_on_locked_coremask 00:04:58.501 ************************************ 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1477232 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1477232 /var/tmp/spdk.sock 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1477232 ']' 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.501 20:32:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.501 [2024-07-24 20:32:54.021330] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:58.501 [2024-07-24 20:32:54.021417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477232 ] 00:04:58.501 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.759 [2024-07-24 20:32:54.079091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.759 [2024-07-24 20:32:54.189142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1477235 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.017 
20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1477235 /var/tmp/spdk2.sock 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1477235 /var/tmp/spdk2.sock 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1477235 /var/tmp/spdk2.sock 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1477235 ']' 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.017 20:32:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.017 [2024-07-24 20:32:54.499025] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:04:59.017 [2024-07-24 20:32:54.499099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477235 ] 00:04:59.017 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.275 [2024-07-24 20:32:54.591209] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1477232 has claimed it. 00:04:59.275 [2024-07-24 20:32:54.591281] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1477235) - No such process 00:04:59.839 ERROR: process (pid: 1477235) is no longer running 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1477232 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1477232 00:04:59.839 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.404 lslocks: write error 00:05:00.404 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1477232 00:05:00.404 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1477232 ']' 00:05:00.404 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1477232 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477232 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477232' 00:05:00.405 killing process with pid 1477232 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1477232 00:05:00.405 20:32:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1477232 00:05:00.662 00:05:00.662 real 0m2.218s 00:05:00.662 user 0m2.361s 00:05:00.662 sys 0m0.705s 00:05:00.662 20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.662 
20:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.662 ************************************ 00:05:00.662 END TEST locking_app_on_locked_coremask 00:05:00.662 ************************************ 00:05:00.662 20:32:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:00.662 20:32:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.662 20:32:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.662 20:32:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 ************************************ 00:05:00.921 START TEST locking_overlapped_coremask 00:05:00.921 ************************************ 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1477529 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1477529 /var/tmp/spdk.sock 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1477529 ']' 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:00.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.921 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 [2024-07-24 20:32:56.283907] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:05:00.921 [2024-07-24 20:32:56.284006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477529 ] 00:05:00.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.921 [2024-07-24 20:32:56.340784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.921 [2024-07-24 20:32:56.452362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.921 [2024-07-24 20:32:56.452385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.921 [2024-07-24 20:32:56.452387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1477535 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1477535 /var/tmp/spdk2.sock 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1477535 /var/tmp/spdk2.sock 
00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1477535 /var/tmp/spdk2.sock 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1477535 ']' 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.180 20:32:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.438 [2024-07-24 20:32:56.767341] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:05:01.438 [2024-07-24 20:32:56.767439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477535 ]
00:05:01.438 EAL: No free 2048 kB hugepages reported on node 1
00:05:01.438 [2024-07-24 20:32:56.856995] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1477529 has claimed it.
00:05:01.438 [2024-07-24 20:32:56.857055] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:02.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1477535) - No such process
00:05:02.023 ERROR: process (pid: 1477535) is no longer running
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1477529
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1477529 ']'
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1477529
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477529
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477529'
00:05:02.023 killing process with pid 1477529
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1477529
00:05:02.023 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1477529
00:05:02.589
00:05:02.590 real 0m1.731s
00:05:02.590 user 0m4.592s
00:05:02.590 sys 0m0.470s
00:05:02.590 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:02.590 20:32:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:02.590 ************************************
00:05:02.590 END TEST locking_overlapped_coremask
00:05:02.590 ************************************
00:05:02.590 20:32:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:02.590 20:32:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:02.590 20:32:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:02.590 20:32:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:02.590 ************************************
00:05:02.590 START TEST locking_overlapped_coremask_via_rpc
00:05:02.590 ************************************
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1477799
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1477799 /var/tmp/spdk.sock
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1477799 ']'
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:02.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:02.590 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.590 [2024-07-24 20:32:58.068261] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:05:02.590 [2024-07-24 20:32:58.068383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477799 ]
00:05:02.590 EAL: No free 2048 kB hugepages reported on node 1
00:05:02.590 [2024-07-24 20:32:58.130227] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:02.590 [2024-07-24 20:32:58.130266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:02.848 [2024-07-24 20:32:58.250007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:02.848 [2024-07-24 20:32:58.250055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:02.848 [2024-07-24 20:32:58.250058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.780 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:03.780 20:32:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1477843
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1477843 /var/tmp/spdk2.sock
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1477843 ']'
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:03.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:03.780 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:03.780 [2024-07-24 20:32:59.053998] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:05:03.780 [2024-07-24 20:32:59.054094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477843 ]
00:05:03.780 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.780 [2024-07-24 20:32:59.148440] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:03.780 [2024-07-24 20:32:59.148484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:04.038 [2024-07-24 20:32:59.371502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:04.038 [2024-07-24 20:32:59.371564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:05:04.039 [2024-07-24 20:32:59.371567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:04.604 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:04.604 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:04.604 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:04.604 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:04.604 20:32:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:04.604 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.604 [2024-07-24 20:33:00.013369] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1477799 has claimed it.
00:05:04.604 request:
00:05:04.604 {
00:05:04.604 "method": "framework_enable_cpumask_locks",
00:05:04.604 "req_id": 1
00:05:04.604 }
00:05:04.604 Got JSON-RPC error response
00:05:04.604 response:
00:05:04.605 {
00:05:04.605 "code": -32603,
00:05:04.605 "message": "Failed to claim CPU core: 2"
00:05:04.605 }
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1477799 /var/tmp/spdk.sock
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1477799 ']'
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:04.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:04.605 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1477843 /var/tmp/spdk2.sock
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1477843 ']'
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:04.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:04.862 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:05.120
00:05:05.120 real 0m2.540s
00:05:05.120 user 0m1.249s
00:05:05.120 sys 0m0.218s
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:05.120 20:33:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.120 ************************************
00:05:05.120 END TEST locking_overlapped_coremask_via_rpc
00:05:05.120 ************************************
00:05:05.120 20:33:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:05.120 20:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1477799 ]]
00:05:05.120 20:33:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1477799
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1477799 ']'
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1477799
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477799
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477799'
00:05:05.120 killing process with pid 1477799
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1477799
00:05:05.120 20:33:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1477799
00:05:05.686 20:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1477843 ]]
00:05:05.686 20:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1477843
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1477843 ']'
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1477843
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477843
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477843'
00:05:05.686 killing process with pid 1477843
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1477843
00:05:05.686 20:33:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1477843
00:05:05.943 20:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:05.943 20:33:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:05.943 20:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1477799 ]]
00:05:05.943 20:33:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1477799
00:05:05.943 20:33:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1477799 ']'
00:05:05.943 20:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1477799
00:05:05.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1477799) - No such process
00:05:05.943 20:33:01 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1477799 is not found'
00:05:05.943 Process with pid 1477799 is not found
00:05:05.944 20:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1477843 ]]
00:05:05.944 20:33:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1477843
00:05:05.944 20:33:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1477843 ']'
00:05:05.944 20:33:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1477843
00:05:05.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1477843) - No such process
00:05:05.944 20:33:01 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1477843 is not found'
00:05:05.944 Process with pid 1477843 is not found
00:05:05.944 20:33:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:05.944
00:05:05.944 real 0m18.580s
00:05:05.944 user 0m32.153s
00:05:05.944 sys 0m5.545s
00:05:05.944 20:33:01 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:05.944 20:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.944 ************************************
00:05:05.944 END TEST cpu_locks
00:05:05.944 ************************************
00:05:06.207
00:05:06.207 real 0m42.657s
00:05:06.207 user 1m20.125s
00:05:06.207 sys 0m9.568s
00:05:06.207 20:33:01 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:06.207 20:33:01 event -- common/autotest_common.sh@10 -- # set +x
00:05:06.207 ************************************
00:05:06.207 END TEST event
00:05:06.207 ************************************
00:05:06.207 20:33:01 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:06.207 20:33:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:06.207 20:33:01 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:06.207 20:33:01 -- common/autotest_common.sh@10 -- # set +x
00:05:06.207 ************************************
00:05:06.207 START TEST thread
00:05:06.207 ************************************
00:05:06.207 20:33:01 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:06.207 * Looking for test storage...
00:05:06.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:05:06.207 20:33:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:06.207 20:33:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:05:06.207 20:33:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:06.207 20:33:01 thread -- common/autotest_common.sh@10 -- # set +x
00:05:06.207 ************************************
00:05:06.207 START TEST thread_poller_perf
00:05:06.207 ************************************
00:05:06.207 20:33:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:06.207 [2024-07-24 20:33:01.642097] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:05:06.207 [2024-07-24 20:33:01.642152] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478420 ]
00:05:06.207 EAL: No free 2048 kB hugepages reported on node 1
00:05:06.207 [2024-07-24 20:33:01.700105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.515 [2024-07-24 20:33:01.810927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.515 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:07.444 ======================================
00:05:07.444 busy:2707657265 (cyc)
00:05:07.444 total_run_count: 292000
00:05:07.444 tsc_hz: 2700000000 (cyc)
00:05:07.444 ======================================
00:05:07.444 poller_cost: 9272 (cyc), 3434 (nsec)
00:05:07.444
00:05:07.444 real 0m1.308s
00:05:07.444 user 0m1.225s
00:05:07.444 sys 0m0.077s
00:05:07.444 20:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:07.444 20:33:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:07.444 ************************************
00:05:07.444 END TEST thread_poller_perf
00:05:07.444 ************************************
00:05:07.444 20:33:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:07.444 20:33:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:05:07.444 20:33:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:07.444 20:33:02 thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.444 ************************************
00:05:07.444 START TEST thread_poller_perf
00:05:07.444 ************************************
00:05:07.444 20:33:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:07.444 [2024-07-24 20:33:02.998888] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:05:07.444 [2024-07-24 20:33:02.998949] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478599 ]
00:05:07.700 EAL: No free 2048 kB hugepages reported on node 1
00:05:07.700 [2024-07-24 20:33:03.060487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.700 [2024-07-24 20:33:03.177975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.700 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:09.067 ======================================
00:05:09.067 busy:2702621445 (cyc)
00:05:09.067 total_run_count: 3856000
00:05:09.067 tsc_hz: 2700000000 (cyc)
00:05:09.067 ======================================
00:05:09.067 poller_cost: 700 (cyc), 259 (nsec)
00:05:09.067
00:05:09.067 real 0m1.317s
00:05:09.067 user 0m1.222s
00:05:09.067 sys 0m0.088s
00:05:09.067 20:33:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:09.067 20:33:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:09.067 ************************************
00:05:09.067 END TEST thread_poller_perf
00:05:09.067 ************************************
00:05:09.067 20:33:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:09.067
00:05:09.067 real 0m2.764s
00:05:09.067 user 0m2.499s
00:05:09.067 sys 0m0.261s
00:05:09.067 20:33:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:09.067 20:33:04 thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.067 ************************************
00:05:09.067 END TEST thread
00:05:09.067 ************************************
00:05:09.067 20:33:04 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:05:09.067 20:33:04 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:09.067 20:33:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:09.067 20:33:04 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:09.067 20:33:04 -- common/autotest_common.sh@10 -- # set +x
00:05:09.067 ************************************
00:05:09.067 START TEST app_cmdline
00:05:09.067 ************************************
00:05:09.067 20:33:04 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:09.067 * Looking for test storage...
00:05:09.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:09.067 20:33:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:09.067 20:33:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1478796
00:05:09.067 20:33:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:09.067 20:33:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1478796 /var/tmp/spdk.sock
00:05:09.067 20:33:04 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1478796 ']'
00:05:09.068 20:33:04 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:09.068 20:33:04 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:09.068 20:33:04 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:09.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:09.068 20:33:04 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:09.068 20:33:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:09.068 [2024-07-24 20:33:04.466841] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:05:09.068 [2024-07-24 20:33:04.466927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478796 ]
00:05:09.068 EAL: No free 2048 kB hugepages reported on node 1
00:05:09.068 [2024-07-24 20:33:04.523372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.068 [2024-07-24 20:33:04.628961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.630 20:33:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:09.630 20:33:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:05:09.631 20:33:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:09.631 {
00:05:09.631 "version": "SPDK v24.09-pre git sha1 2ce15115b",
00:05:09.631 "fields": {
00:05:09.631 "major": 24,
00:05:09.631 "minor": 9,
00:05:09.631 "patch": 0,
00:05:09.631 "suffix": "-pre",
00:05:09.631 "commit": "2ce15115b"
00:05:09.631 }
00:05:09.631 }
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:09.631 20:33:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:09.631 20:33:05 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:09.888 request:
00:05:09.888 {
00:05:09.888 "method": "env_dpdk_get_mem_stats",
00:05:09.888 "req_id": 1
00:05:09.888 } 00:05:09.888 Got JSON-RPC error response 00:05:09.888 response: 00:05:09.888 { 00:05:09.888 "code": -32601, 00:05:09.888 "message": "Method not found" 00:05:09.888 } 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:09.888 20:33:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1478796 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1478796 ']' 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1478796 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.888 20:33:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478796 00:05:10.145 20:33:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.145 20:33:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.145 20:33:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478796' 00:05:10.145 killing process with pid 1478796 00:05:10.145 20:33:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 1478796 00:05:10.145 20:33:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 1478796 00:05:10.402 00:05:10.402 real 0m1.554s 00:05:10.402 user 0m1.860s 00:05:10.402 sys 0m0.476s 00:05:10.402 20:33:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.402 20:33:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:10.402 ************************************ 00:05:10.402 END TEST app_cmdline 00:05:10.402 ************************************ 00:05:10.402 20:33:05 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:10.402 20:33:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.402 20:33:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.402 20:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:10.658 ************************************ 00:05:10.658 START TEST version 00:05:10.658 ************************************ 00:05:10.658 20:33:05 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:10.658 * Looking for test storage... 00:05:10.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:10.658 20:33:06 version -- app/version.sh@17 -- # get_header_version major 00:05:10.658 20:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # cut -f2 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.658 20:33:06 version -- app/version.sh@17 -- # major=24 00:05:10.658 20:33:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:10.658 20:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # cut -f2 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.658 20:33:06 version -- app/version.sh@18 -- # minor=9 00:05:10.658 20:33:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:10.658 20:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # cut -f2 00:05:10.658 20:33:06 
version -- app/version.sh@14 -- # tr -d '"' 00:05:10.658 20:33:06 version -- app/version.sh@19 -- # patch=0 00:05:10.658 20:33:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:10.658 20:33:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # cut -f2 00:05:10.658 20:33:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.658 20:33:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:10.658 20:33:06 version -- app/version.sh@22 -- # version=24.9 00:05:10.658 20:33:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:10.658 20:33:06 version -- app/version.sh@28 -- # version=24.9rc0 00:05:10.658 20:33:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:10.658 20:33:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:10.658 20:33:06 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:10.658 20:33:06 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:10.658 00:05:10.658 real 0m0.105s 00:05:10.658 user 0m0.056s 00:05:10.658 sys 0m0.070s 00:05:10.658 20:33:06 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.658 20:33:06 version -- common/autotest_common.sh@10 -- # set +x 00:05:10.658 ************************************ 00:05:10.658 END TEST version 00:05:10.658 ************************************ 00:05:10.658 20:33:06 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@202 -- # uname -s 00:05:10.658 20:33:06 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:05:10.658 20:33:06 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:10.658 20:33:06 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:10.658 20:33:06 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:10.658 20:33:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.658 20:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:10.658 20:33:06 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:10.658 20:33:06 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:10.658 20:33:06 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:10.658 20:33:06 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:10.658 20:33:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.658 20:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:10.658 ************************************ 00:05:10.658 START TEST nvmf_tcp 00:05:10.658 ************************************ 00:05:10.659 20:33:06 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:10.659 * Looking for test storage... 00:05:10.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:10.659 20:33:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:10.659 20:33:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:10.659 20:33:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:10.659 20:33:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:10.659 20:33:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.659 20:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.659 ************************************ 00:05:10.659 START TEST nvmf_target_core 00:05:10.659 ************************************ 00:05:10.659 20:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:10.916 * Looking for test storage... 00:05:10.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.916 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.917 20:33:06 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:10.917 ************************************ 00:05:10.917 START TEST nvmf_abort 00:05:10.917 ************************************ 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.917 * Looking for test storage... 
00:05:10.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.917 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:10.918 20:33:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:10.918 20:33:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:12.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:12.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:12.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:12.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:12.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:05:12.815 00:05:12.815 --- 10.0.0.2 ping statistics --- 00:05:12.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.815 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:12.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:05:12.815 00:05:12.815 --- 10.0.0.1 ping statistics --- 00:05:12.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.815 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:12.815 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1481201 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1481201 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1481201 ']' 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.815 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.073 [2024-07-24 20:33:08.418529] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:05:13.073 [2024-07-24 20:33:08.418624] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.073 [2024-07-24 20:33:08.482692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.073 [2024-07-24 20:33:08.596382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:13.074 [2024-07-24 20:33:08.596439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:13.074 [2024-07-24 20:33:08.596454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.074 [2024-07-24 20:33:08.596466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.074 [2024-07-24 20:33:08.596476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:13.074 [2024-07-24 20:33:08.596565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.074 [2024-07-24 20:33:08.596594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.074 [2024-07-24 20:33:08.596596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 [2024-07-24 20:33:08.734513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 Malloc0 00:05:13.332 20:33:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 Delay0 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 [2024-07-24 20:33:08.812487] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.332 20:33:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:13.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.590 [2024-07-24 20:33:08.917164] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:15.490 Initializing NVMe Controllers 00:05:15.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:15.490 controller IO queue size 128 less than required 00:05:15.490 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:15.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:15.490 Initialization complete. Launching workers. 
00:05:15.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30405 00:05:15.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30466, failed to submit 62 00:05:15.490 success 30409, unsuccess 57, failed 0 00:05:15.490 20:33:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:15.490 20:33:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.490 20:33:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.490 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.490 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:15.490 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:15.491 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:15.491 rmmod nvme_tcp 00:05:15.491 rmmod nvme_fabrics 00:05:15.491 rmmod nvme_keyring 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:15.749 20:33:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1481201 ']' 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1481201 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1481201 ']' 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1481201 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1481201 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1481201' 00:05:15.749 killing process with pid 1481201 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1481201 00:05:15.749 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1481201 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:16.008 20:33:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:16.008 20:33:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.909 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:17.909 00:05:17.909 real 0m7.162s 00:05:17.909 user 0m10.380s 00:05:17.909 sys 0m2.495s 00:05:17.909 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.909 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.909 ************************************ 00:05:17.909 END TEST nvmf_abort 00:05:17.909 ************************************ 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:18.167 ************************************ 00:05:18.167 START TEST nvmf_ns_hotplug_stress 00:05:18.167 ************************************ 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:18.167 * Looking for test storage... 
00:05:18.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.167 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:18.168 20:33:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:18.168 20:33:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:20.071 20:33:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:20.071 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:20.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:20.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.072 20:33:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:20.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:20.072 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:20.072 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:20.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:20.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:05:20.073 00:05:20.073 --- 10.0.0.2 ping statistics --- 00:05:20.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.073 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:05:20.073 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:20.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:20.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:05:20.331 00:05:20.331 --- 10.0.0.1 ping statistics --- 00:05:20.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.331 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1483569 00:05:20.331 20:33:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1483569 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1483569 ']' 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.331 20:33:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.331 [2024-07-24 20:33:15.718586] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:05:20.331 [2024-07-24 20:33:15.718683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:20.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.331 [2024-07-24 20:33:15.789609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.590 [2024-07-24 20:33:15.907142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:20.590 [2024-07-24 20:33:15.907186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:20.590 [2024-07-24 20:33:15.907211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.590 [2024-07-24 20:33:15.907222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.590 [2024-07-24 20:33:15.907268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:20.590 [2024-07-24 20:33:15.907326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.590 [2024-07-24 20:33:15.907353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.590 [2024-07-24 20:33:15.907356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:21.156 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:05:21.413 [2024-07-24 20:33:16.890465] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.413 20:33:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:21.671 20:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:21.928 [2024-07-24 20:33:17.409969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:21.928 20:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:22.186 20:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:22.444 Malloc0 00:05:22.444 20:33:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:22.702 Delay0 00:05:22.702 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.993 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:23.265 NULL1 00:05:23.265 20:33:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:23.523 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1483998 00:05:23.523 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:23.523 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:23.523 20:33:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.523 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.781 20:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.039 20:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:24.039 20:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:24.297 true 00:05:24.297 20:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:24.297 20:33:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.554 20:33:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.812 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:24.812 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:25.070 true 00:05:25.070 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:25.070 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.327 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.585 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:25.585 20:33:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:25.842 true 00:05:25.842 20:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:25.842 20:33:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.775 Read completed with error (sct=0, sc=11) 00:05:26.775 20:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.033 20:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:27.033 20:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:27.291 true 00:05:27.291 20:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:27.291 20:33:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.222 20:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.222 20:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:28.222 20:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1005 00:05:28.478 true 00:05:28.478 20:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:28.478 20:33:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.735 20:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.992 20:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:28.992 20:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:29.250 true 00:05:29.250 20:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:29.250 20:33:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.182 20:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.439 20:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:30.439 20:33:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:30.439 true 00:05:30.696 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:30.696 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.696 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.954 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:30.954 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:31.211 true 00:05:31.211 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:31.211 20:33:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.143 20:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.400 20:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:05:32.400 20:33:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:32.772 true 00:05:32.772 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:32.772 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.029 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.287 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:33.287 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:33.545 true 00:05:33.545 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:33.545 20:33:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.480 20:33:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.737 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:34.737 20:33:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:34.995 true 00:05:34.995 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:34.995 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.253 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.510 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:35.510 20:33:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:35.510 true 00:05:35.768 20:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:35.768 20:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.701 20:33:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.701 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:36.701 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:36.959 true 00:05:36.959 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:36.959 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.217 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.475 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:37.475 20:33:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:37.732 true 00:05:37.732 20:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:37.732 20:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.743 20:33:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.743 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:38.743 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:39.000 true 00:05:39.000 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:39.000 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.258 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.515 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:39.515 20:33:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:39.773 true 00:05:39.773 20:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:39.773 20:33:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.705 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.963 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:40.963 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:41.220 true 00:05:41.220 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:41.220 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.478 20:33:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.736 20:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:41.736 20:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:41.994 true 00:05:41.994 20:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:41.994 20:33:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.926 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.926 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:42.926 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:43.184 true 00:05:43.184 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:43.184 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.441 20:33:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.699 20:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:43.699 20:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:43.957 true 00:05:43.957 20:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:43.957 20:33:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.890 20:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.147 20:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:45.147 20:33:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:45.405 true 00:05:45.405 20:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:45.405 20:33:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.663 20:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.920 20:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:45.920 20:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:46.177 true 00:05:46.177 20:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:46.177 20:33:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.109 20:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.366 20:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:05:47.366 20:33:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:47.623 true 00:05:47.623 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:47.623 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.880 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.137 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:48.138 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:48.394 true 00:05:48.394 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:48.394 20:33:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.326 20:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.582 20:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:49.582 20:33:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:49.582 true 00:05:49.582 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:49.582 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.839 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.096 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:50.096 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:50.353 true 00:05:50.353 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:50.353 20:33:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.287 20:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.544 20:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:51.544 20:33:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:51.801 true 00:05:51.801 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:51.801 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.059 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.317 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:52.317 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:52.575 true 00:05:52.575 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:52.575 20:33:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.478 20:33:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.478 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:53.478 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:53.735 Initializing NVMe Controllers 00:05:53.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:53.736 Controller IO queue size 128, less than required. 00:05:53.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:53.736 Controller IO queue size 128, less than required. 00:05:53.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:53.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:53.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:53.736 Initialization complete. Launching workers. 00:05:53.736 ======================================================== 00:05:53.736 Latency(us) 00:05:53.736 Device Information : IOPS MiB/s Average min max 00:05:53.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 899.44 0.44 74964.35 2469.05 1012814.46 00:05:53.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11147.33 5.44 11448.18 3637.54 447428.35 00:05:53.736 ======================================================== 00:05:53.736 Total : 12046.77 5.88 16190.44 2469.05 1012814.46 00:05:53.736 00:05:53.736 true 00:05:53.736 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1483998 00:05:53.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1483998) - No such process 00:05:53.736 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1483998 00:05:53.736 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.993 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.250 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:54.250 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:54.251 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:54.251 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:54.251 20:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:54.508 null0 00:05:54.508 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:54.508 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:54.508 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:54.766 null1 00:05:54.766 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:54.766 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:54.766 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 
00:05:55.025 null2 00:05:55.025 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.025 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.025 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:55.282 null3 00:05:55.282 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.282 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.282 20:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:55.540 null4 00:05:55.540 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.540 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.540 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:55.798 null5 00:05:55.798 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.798 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.798 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:56.055 null6 00:05:56.055 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.055 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.055 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:56.314 null7 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:56.314 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.315 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.315 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1488061 1488062 1488064 1488066 1488068 1488070 1488072 1488074 00:05:56.315 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.315 20:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.572 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.572 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.572 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.573 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.573 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.573 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.573 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.573 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.830 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.831 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.831 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.831 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.831 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.831 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.088 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.346 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.347 20:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.605 20:33:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.605 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.863 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
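The trace entries above (script lines @16-@18) repeat one pattern: attach namespaces 1..8, each backed by a `null0`..`null7` bdev, to `nqn.2016-06.io.spdk:cnode1`, then detach them, guarded by `(( i < 10 ))`. A minimal sketch of that cycle, assuming the loop structure implied by the trace (the real script drives `spdk/scripts/rpc.py`; `rpc` below is a stand-in stub, and the parallel interleaving visible in the log is not reproduced):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug cycle implied by the ns_hotplug_stress.sh trace.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }   # stub; the real test calls spdk/scripts/rpc.py

one_cycle() {                 # one add-all / remove-all pass over 8 namespaces
    local n
    for n in {1..8}; do       # nsid n is backed by bdev null$((n - 1))
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in {1..8}; do       # then detach every namespace again
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
}

for (( i = 0; i < 10; ++i )); do   # the (( i < 10 )) guard in the trace
    one_cycle
done
```

Each cycle issues 8 add and 8 remove RPCs; the `(( ++i ))` / `(( i < 10 ))` pairs in the log are the xtrace of this loop header.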
00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.121 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.379 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.637 20:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.895 20:33:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.895 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.153 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.411 20:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.669 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.670 20:33:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.670 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.928 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.185 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.185 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.185 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.185 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.185 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.185 
20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.186 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.443 20:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:00.701 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:00.959 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.216 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:01.474 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:01.474 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:01.474 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:01.474 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:01.474 20:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.474 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:01.474 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:01.474 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:01.732 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:06:01.733 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:01.733 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:06:01.733 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:01.733 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:01.990 rmmod nvme_tcp
00:06:01.990 rmmod nvme_fabrics
00:06:01.990 rmmod nvme_keyring
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1483569 ']'
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1483569
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1483569 ']'
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1483569
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483569
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483569'
killing process with pid 1483569
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1483569
00:06:01.990 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1483569
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:02.248 20:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:04.147 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:04.147
00:06:04.147 real 0m46.196s
00:06:04.147 user 3m30.490s
00:06:04.147 sys 0m16.183s
00:06:04.147 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.147 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:04.147 ************************************
00:06:04.147 END TEST nvmf_ns_hotplug_stress
00:06:04.147 ************************************
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:04.406 ************************************
00:06:04.406 START TEST nvmf_delete_subsystem
00:06:04.406 ************************************
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:04.406 * Looking for test storage...
00:06:04.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:04.406 20:33:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:06.305 20:34:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:06.305 20:34:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:06.305 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:06.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:06.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.306 20:34:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:06.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:06.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
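The "Found net devices under 0000:0a:00.x" lines come from a sysfs glob (nvmf/common.sh@383-401): each PCI function's kernel net devices appear as directories under /sys/bus/pci/devices/$pci/net/. A runnable demonstration of that lookup, using a throwaway fake sysfs tree instead of the real /sys:

```shell
# Map a PCI address to its net devices the way the harness does: glob the
# per-device net/ directory, then strip everything up to the last slash.
SYSFS=$(mktemp -d)    # fake sysfs root so this runs on any machine
mkdir -p "$SYSFS/devices/0000:0a:00.0/net/cvl_0_0" \
         "$SYSFS/devices/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("$SYSFS/devices/$pci/net/"*)   # one entry per netdev directory
  pci_net_devs=("${pci_net_devs[@]##*/}")      # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$SYSFS"
```

The `${var##*/}` expansion is the same basename trick as nvmf/common.sh@399 in the trace.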
)) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:06.306 
20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:06.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:06.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:06:06.306 00:06:06.306 --- 10.0.0.2 ping statistics --- 00:06:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.306 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:06.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
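The nvmf_tcp_init sequence above moves one port of the back-to-back pair into a network namespace, addresses both sides on 10.0.0.0/24, opens TCP 4420, and pings in both directions. Those commands need root and the real cvl_0_* ports, so this sketch assembles the same sequence behind a run() wrapper that echoes by default (set DRY_RUN=0 on a suitably equipped machine to execute them):

```shell
# Dry-run reconstruction of the namespace plumbing from nvmf/common.sh@248-267.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target side lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # initiator -> target sanity check
```

Putting the target's port in its own namespace is what lets initiator and target share one host while still crossing a real wire.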
00:06:06.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:06:06.306 00:06:06.306 --- 10.0.0.1 ping statistics --- 00:06:06.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.306 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1490822 00:06:06.306 20:34:01 
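nvmfappstart (seen just above and continuing below) launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock appears. The polling shape of that wait can be sketched generically; here a background `touch` stands in for nvmf_tgt creating its socket, so the flow runs anywhere:

```shell
# "waitforlisten"-style helper: poll until a path exists, with bounded retries.
wait_for_path() {
  local path=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)                  # pretend RPC socket path (assumption)
( sleep 0.3; touch "$sock" ) &     # stand-in for nvmf_tgt binding its socket
wait_for_path "$sock" 50 && echo "listening: $sock"
rm -f "$sock"
```

The real helper additionally probes the socket with an RPC call rather than just checking existence; this sketch only captures the retry loop.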
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1490822 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1490822 ']' 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.306 20:34:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.565 [2024-07-24 20:34:01.914430] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:06:06.565 [2024-07-24 20:34:01.914505] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.565 [2024-07-24 20:34:01.984622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.565 [2024-07-24 20:34:02.101087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:06.565 [2024-07-24 20:34:02.101150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:06.565 [2024-07-24 20:34:02.101176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.565 [2024-07-24 20:34:02.101198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.565 [2024-07-24 20:34:02.101211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:06.565 [2024-07-24 20:34:02.101308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.565 [2024-07-24 20:34:02.101313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.499 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.499 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:07.499 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 [2024-07-24 20:34:02.898577] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 [2024-07-24 20:34:02.914907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 NULL1 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 Delay0 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1490975 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:07.500 20:34:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:07.500 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.500 [2024-07-24 20:34:02.989552] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
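The rpc_cmd calls traced above (delete_subsystem.sh@15-26) provision the target end to end: TCP transport, subsystem cnode1, a 4420 listener, a null bdev, a delay bdev wrapping it (1,000,000 us on every I/O path, which is what keeps requests in flight long enough to race the delete), and the namespace attach. The same sequence expressed as plain RPC invocations, with RPC=echo so it runs without a live target (point RPC at scripts/rpc.py against a real one):

```shell
# Dry-run of the provisioning RPCs from the trace; RPC=echo just prints them.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512        # size and block size as in the trace
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1s latency on every op type
$RPC nvmf_subsystem_add_ns "$NQN" Delay0
```

Layering Delay0 over NULL1 is the test's mechanism for guaranteeing a deep queue of outstanding I/O when the subsystem is deleted.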
00:06:09.430 20:34:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:09.430 20:34:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.430 20:34:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 starting I/O failed: -6 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Write completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 starting I/O failed: -6 00:06:09.687 Write completed with error (sct=0, sc=8) 00:06:09.687 Write completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Write completed with error (sct=0, sc=8) 00:06:09.687 starting I/O failed: -6 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Write completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 starting I/O failed: -6 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 Read completed with error (sct=0, sc=8) 00:06:09.687 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 
00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 [2024-07-24 20:34:05.078103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee83e0 is same with the state(5) to be set 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 
00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write 
completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error 
(sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 starting I/O failed: -6 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 [2024-07-24 20:34:05.078898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f7400d330 is same with the state(5) to be set 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 
00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Write completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.688 Read 
completed with error (sct=0, sc=8) 00:06:09.688 Read completed with error (sct=0, sc=8) 00:06:09.689 Read completed with error (sct=0, sc=8) 00:06:09.689 Read completed with error (sct=0, sc=8) 00:06:09.689 Read completed with error (sct=0, sc=8) 00:06:09.689 Write completed with error (sct=0, sc=8) 00:06:09.689 Read completed with error (sct=0, sc=8) 00:06:10.618 [2024-07-24 20:34:06.046691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee9ac0 is same with the state(5) to be set 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 [2024-07-24 20:34:06.077983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee85c0 is same with the state(5) to be set 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed 
with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 [2024-07-24 20:34:06.080280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee8c20 is same with the state(5) to be set 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.618 Write completed with error (sct=0, sc=8) 00:06:10.618 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, 
sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 [2024-07-24 20:34:06.080801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f7400d000 is same with the state(5) to be set 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Write completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 
00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 Read completed with error (sct=0, sc=8) 00:06:10.619 [2024-07-24 20:34:06.081495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f7400d660 is same with the state(5) to be set 00:06:10.619 Initializing NVMe Controllers 00:06:10.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:10.619 Controller IO queue size 128, less than required. 00:06:10.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:10.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:10.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:10.619 Initialization complete. Launching workers. 00:06:10.619 ======================================================== 00:06:10.619 Latency(us) 00:06:10.619 Device Information : IOPS MiB/s Average min max 00:06:10.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.78 0.08 906198.08 650.02 1011974.56 00:06:10.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.72 0.08 890732.70 446.01 1013606.97 00:06:10.619 ======================================================== 00:06:10.619 Total : 337.50 0.16 898283.45 446.01 1013606.97 00:06:10.619 00:06:10.619 [2024-07-24 20:34:06.081982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee9ac0 (9): Bad file descriptor 00:06:10.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:10.619 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.619 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:10.619 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 1490975 00:06:10.619 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1490975 00:06:11.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1490975) - No such process 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1490975 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1490975 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1490975 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.182 20:34:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 [2024-07-24 20:34:06.605959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=1491385 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.182 20:34:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:11.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.182 [2024-07-24 20:34:06.671749] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:11.747 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.747 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:11.747 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.311 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:12.311 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:12.311 20:34:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.568 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:12.568 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:12.568 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.131 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.131 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:13.132 20:34:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.694 20:34:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.694 20:34:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:13.694 20:34:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.256 20:34:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.256 20:34:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:14.256 20:34:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.514 Initializing NVMe Controllers 00:06:14.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:14.514 Controller IO queue size 128, less than required. 00:06:14.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:14.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:14.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:14.514 Initialization complete. Launching workers. 00:06:14.514 ======================================================== 00:06:14.514 Latency(us) 00:06:14.514 Device Information : IOPS MiB/s Average min max 00:06:14.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004603.34 1000182.16 1041400.16 00:06:14.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006183.33 1000180.70 1043358.65 00:06:14.514 ======================================================== 00:06:14.514 Total : 256.00 0.12 1005393.33 1000180.70 1043358.65 00:06:14.514 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1491385 00:06:14.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1491385) - No such process 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1491385 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:14.773 rmmod nvme_tcp 00:06:14.773 rmmod nvme_fabrics 00:06:14.773 rmmod nvme_keyring 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1490822 ']' 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1490822 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1490822 ']' 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1490822 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.773 20:34:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1490822 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1490822' 00:06:14.773 killing process with pid 1490822 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1490822 00:06:14.773 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1490822 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.031 20:34:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.556 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.556 00:06:17.556 real 0m12.797s 00:06:17.556 
user 0m29.230s 00:06:17.556 sys 0m2.889s 00:06:17.556 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.556 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.556 ************************************ 00:06:17.556 END TEST nvmf_delete_subsystem 00:06:17.556 ************************************ 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.557 ************************************ 00:06:17.557 START TEST nvmf_host_management 00:06:17.557 ************************************ 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:17.557 * Looking for test storage... 
00:06:17.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # 
nvmftestinit 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.557 20:34:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:19.452 20:34:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:19.452 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:19.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.453 
20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:19.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.453 
20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:19.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:19.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:19.453 
20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:19.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:06:19.453 00:06:19.453 --- 10.0.0.2 ping statistics --- 00:06:19.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.453 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:19.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:06:19.453 00:06:19.453 --- 10.0.0.1 ping statistics --- 00:06:19.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.453 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.453 20:34:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1493727 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1493727 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1493727 ']' 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.453 20:34:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.453 [2024-07-24 20:34:14.719107] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:06:19.453 [2024-07-24 20:34:14.719192] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.453 [2024-07-24 20:34:14.787346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.453 [2024-07-24 20:34:14.905920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.453 [2024-07-24 20:34:14.905983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.453 [2024-07-24 20:34:14.905999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.453 [2024-07-24 20:34:14.906013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.454 [2024-07-24 20:34:14.906024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:19.454 [2024-07-24 20:34:14.906119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.454 [2024-07-24 20:34:14.906235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.454 [2024-07-24 20:34:14.906368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.454 [2024-07-24 20:34:14.906372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 [2024-07-24 20:34:15.679853] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:20.385 20:34:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 Malloc0 00:06:20.385 [2024-07-24 20:34:15.741109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1493902 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1493902 /var/tmp/bdevperf.sock 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1493902 ']' 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:20.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:20.385 { 00:06:20.385 "params": { 00:06:20.385 "name": "Nvme$subsystem", 00:06:20.385 "trtype": "$TEST_TRANSPORT", 00:06:20.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:20.385 "adrfam": "ipv4", 00:06:20.385 "trsvcid": "$NVMF_PORT", 00:06:20.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:20.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:20.385 "hdgst": ${hdgst:-false}, 
00:06:20.385 "ddgst": ${ddgst:-false} 00:06:20.385 }, 00:06:20.385 "method": "bdev_nvme_attach_controller" 00:06:20.385 } 00:06:20.385 EOF 00:06:20.385 )") 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:20.385 20:34:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:20.385 "params": { 00:06:20.385 "name": "Nvme0", 00:06:20.385 "trtype": "tcp", 00:06:20.385 "traddr": "10.0.0.2", 00:06:20.385 "adrfam": "ipv4", 00:06:20.385 "trsvcid": "4420", 00:06:20.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:20.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:20.385 "hdgst": false, 00:06:20.385 "ddgst": false 00:06:20.385 }, 00:06:20.385 "method": "bdev_nvme_attach_controller" 00:06:20.385 }' 00:06:20.385 [2024-07-24 20:34:15.822746] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:06:20.385 [2024-07-24 20:34:15.822817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493902 ] 00:06:20.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.385 [2024-07-24 20:34:15.883020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.643 [2024-07-24 20:34:15.993374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.900 Running I/O for 10 seconds... 
00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.900 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:20.901 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=534 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 534 -ge 100 ']' 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.159 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.159 [2024-07-24 20:34:16.611938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 
20:34:16.612147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.159 [2024-07-24 20:34:16.612181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.160 [2024-07-24 20:34:16.612194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.160 [2024-07-24 20:34:16.612205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d690 is same with the state(5) to be set 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 [2024-07-24 20:34:16.623836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.160 [2024-07-24 20:34:16.623878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.623896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.160 [2024-07-24 20:34:16.623909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.623923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.160 [2024-07-24 20:34:16.623936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.623949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.160 [2024-07-24 20:34:16.623962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.623974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126f790 is same with the state(5) to be set 00:06:21.160 [2024-07-24 20:34:16.624065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 [2024-07-24 20:34:16.624086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.624109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.160 [2024-07-24 20:34:16.624130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.624145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 [2024-07-24 20:34:16.624159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.624174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 [2024-07-24 20:34:16.624188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.624203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 [2024-07-24 20:34:16.624216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.160 [2024-07-24 20:34:16.624239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.160 20:34:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeated for WRITE sqid:1 cid:6 through cid:63 (nsid:1, len:128, lba 82688 through 89984 in steps of 128), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, elided ...]
[2024-07-24 20:34:16.625982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:21.161 [2024-07-24 20:34:16.626063] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x16805a0 was disconnected and freed. reset controller. 00:06:21.161 [2024-07-24 20:34:16.627221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:21.161 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:21.161 00:06:21.161 Latency(us) 00:06:21.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:21.161 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:21.161 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:21.161 Verification LBA range: start 0x0 length 0x400 00:06:21.161 Nvme0n1 : 0.41 1559.39 97.46 155.94 0.00 36256.44 2669.99 34175.81 00:06:21.161 =================================================================================================================== 00:06:21.162 Total : 1559.39 97.46 155.94 0.00 36256.44 2669.99 34175.81 00:06:21.162 [2024-07-24 20:34:16.629080] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.162 [2024-07-24 20:34:16.629123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126f790 (9): Bad file descriptor 00:06:21.419 [2024-07-24 20:34:16.761402] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1493902 00:06:22.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1493902) - No such process 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:22.351 { 00:06:22.351 "params": { 00:06:22.351 "name": "Nvme$subsystem", 00:06:22.351 "trtype": "$TEST_TRANSPORT", 00:06:22.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:22.351 "adrfam": "ipv4", 00:06:22.351 "trsvcid": "$NVMF_PORT", 00:06:22.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:22.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:22.351 "hdgst": ${hdgst:-false}, 00:06:22.351 "ddgst": ${ddgst:-false} 00:06:22.351 }, 00:06:22.351 "method": "bdev_nvme_attach_controller" 00:06:22.351 } 00:06:22.351 EOF 00:06:22.351 )") 00:06:22.351 
20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:22.351 20:34:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:22.351 "params": { 00:06:22.351 "name": "Nvme0", 00:06:22.351 "trtype": "tcp", 00:06:22.351 "traddr": "10.0.0.2", 00:06:22.351 "adrfam": "ipv4", 00:06:22.351 "trsvcid": "4420", 00:06:22.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:22.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:22.351 "hdgst": false, 00:06:22.351 "ddgst": false 00:06:22.351 }, 00:06:22.351 "method": "bdev_nvme_attach_controller" 00:06:22.351 }' 00:06:22.351 [2024-07-24 20:34:17.673466] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:06:22.351 [2024-07-24 20:34:17.673574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494173 ] 00:06:22.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.351 [2024-07-24 20:34:17.735549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.351 [2024-07-24 20:34:17.847368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.608 Running I/O for 1 seconds... 
00:06:23.980 00:06:23.980 Latency(us) 00:06:23.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.980 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:23.980 Verification LBA range: start 0x0 length 0x400 00:06:23.980 Nvme0n1 : 1.03 1613.08 100.82 0.00 0.00 39047.99 8543.95 32622.36 00:06:23.980 =================================================================================================================== 00:06:23.980 Total : 1613.08 100.82 0.00 0.00 39047.99 8543.95 32622.36 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:23.980 rmmod nvme_tcp 
00:06:23.980 rmmod nvme_fabrics 00:06:23.980 rmmod nvme_keyring 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1493727 ']' 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1493727 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1493727 ']' 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1493727 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.980 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1493727 00:06:24.238 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:24.238 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:24.238 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1493727' 00:06:24.238 killing process with pid 1493727 00:06:24.238 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1493727 00:06:24.238 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1493727 00:06:24.497 [2024-07-24 20:34:19.815650] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.497 20:34:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:26.430 00:06:26.430 real 0m9.289s 00:06:26.430 user 0m22.952s 00:06:26.430 sys 0m2.593s 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:26.430 ************************************ 00:06:26.430 END TEST nvmf_host_management 00:06:26.430 ************************************ 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.430 ************************************ 00:06:26.430 START TEST nvmf_lvol 00:06:26.430 ************************************ 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:26.430 * Looking for test storage... 00:06:26.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.430 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:26.689 20:34:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:26.689 20:34:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.589 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.590 20:34:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:28.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.590 20:34:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:06:28.590 00:06:28.590 --- 10.0.0.2 ping statistics --- 00:06:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.590 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:06:28.590 00:06:28.590 --- 10.0.0.1 ping statistics --- 00:06:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.590 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1496263 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1496263 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1496263 ']' 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.590 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.590 [2024-07-24 20:34:24.101500] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:06:28.590 [2024-07-24 20:34:24.101589] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.848 [2024-07-24 20:34:24.169699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.848 [2024-07-24 20:34:24.287259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.848 [2024-07-24 20:34:24.287320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.848 [2024-07-24 20:34:24.287336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.848 [2024-07-24 20:34:24.287349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.848 [2024-07-24 20:34:24.287360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:28.848 [2024-07-24 20:34:24.287423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.848 [2024-07-24 20:34:24.287483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.848 [2024-07-24 20:34:24.287487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.848 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.848 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:28.848 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.848 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.848 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.106 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.106 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:29.106 [2024-07-24 20:34:24.667457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.363 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.621 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:29.621 20:34:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.878 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:29.879 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:30.136 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:30.394 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e98c64f6-f299-4e77-89b4-0912a21435ec 00:06:30.394 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e98c64f6-f299-4e77-89b4-0912a21435ec lvol 20 00:06:30.651 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2b1e4798-d20d-4cb5-8873-19ed800cf8d0 00:06:30.651 20:34:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.909 20:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b1e4798-d20d-4cb5-8873-19ed800cf8d0 00:06:31.166 20:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.166 [2024-07-24 20:34:26.706501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.166 20:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.424 20:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1496693 00:06:31.424 20:34:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:31.424 20:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:31.681 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.614 20:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2b1e4798-d20d-4cb5-8873-19ed800cf8d0 MY_SNAPSHOT 00:06:32.872 20:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d25db744-b96b-43ee-b337-a35d4a0f4fd4 00:06:32.872 20:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2b1e4798-d20d-4cb5-8873-19ed800cf8d0 30 00:06:33.130 20:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d25db744-b96b-43ee-b337-a35d4a0f4fd4 MY_CLONE 00:06:33.388 20:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a2be8ab9-a815-496a-836d-af940a7e0077 00:06:33.388 20:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a2be8ab9-a815-496a-836d-af940a7e0077 00:06:34.321 20:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1496693 00:06:42.424 Initializing NVMe Controllers 00:06:42.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:42.424 Controller IO queue size 128, less than required. 00:06:42.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:42.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:42.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:42.424 Initialization complete. Launching workers. 00:06:42.424 ======================================================== 00:06:42.424 Latency(us) 00:06:42.424 Device Information : IOPS MiB/s Average min max 00:06:42.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9959.40 38.90 12859.98 716.97 92783.15 00:06:42.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10531.90 41.14 12161.70 2170.48 72468.84 00:06:42.424 ======================================================== 00:06:42.424 Total : 20491.30 80.04 12501.09 716.97 92783.15 00:06:42.424 00:06:42.424 20:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:42.424 20:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2b1e4798-d20d-4cb5-8873-19ed800cf8d0 00:06:42.424 20:34:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e98c64f6-f299-4e77-89b4-0912a21435ec 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.683 rmmod nvme_tcp 00:06:42.683 rmmod nvme_fabrics 00:06:42.683 rmmod nvme_keyring 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1496263 ']' 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1496263 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1496263 ']' 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1496263 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1496263 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1496263' 00:06:42.683 killing process with pid 1496263 00:06:42.683 20:34:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1496263 00:06:42.683 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1496263 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.248 20:34:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.146 00:06:45.146 real 0m18.652s 00:06:45.146 user 1m3.082s 00:06:45.146 sys 0m5.722s 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.146 ************************************ 00:06:45.146 END TEST nvmf_lvol 00:06:45.146 ************************************ 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.146 20:34:40 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.146 ************************************ 00:06:45.146 START TEST nvmf_lvs_grow 00:06:45.146 ************************************ 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:45.146 * Looking for test storage... 00:06:45.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.146 20:34:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.712 20:34:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.712 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.713 
20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.713 20:34:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.713 20:34:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:06:47.713 00:06:47.713 --- 10.0.0.2 ping statistics --- 00:06:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.713 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:06:47.713 00:06:47.713 --- 10.0.0.1 ping statistics --- 00:06:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.713 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1499955 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:47.713 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1499955 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1499955 ']' 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.714 20:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 [2024-07-24 20:34:42.889408] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:06:47.714 [2024-07-24 20:34:42.889501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.714 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.714 [2024-07-24 20:34:42.953493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.714 [2024-07-24 20:34:43.063503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.714 [2024-07-24 20:34:43.063563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:47.714 [2024-07-24 20:34:43.063576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.714 [2024-07-24 20:34:43.063588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.714 [2024-07-24 20:34:43.063598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:47.714 [2024-07-24 20:34:43.063625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.714 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:47.970 [2024-07-24 20:34:43.458225] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.970 ************************************ 00:06:47.970 START TEST lvs_grow_clean 00:06:47.970 ************************************ 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:47.970 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:48.226 20:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:48.226 20:34:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:48.484 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:06:48.484 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:06:48.484 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:48.742 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:48.742 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:48.742 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b lvol 150 00:06:48.999 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=40366384-8136-4734-90eb-58904488eb90 00:06:48.999 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:48.999 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:49.257 [2024-07-24 20:34:44.768461] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:49.257 [2024-07-24 20:34:44.768561] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:49.257 true 00:06:49.257 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:06:49.257 20:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:49.515 20:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:49.515 20:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:49.773 20:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40366384-8136-4734-90eb-58904488eb90 00:06:50.031 20:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.289 [2024-07-24 20:34:45.799686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.289 20:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.545 20:34:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1500401 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1500401 /var/tmp/bdevperf.sock 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1500401 ']' 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:50.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.545 20:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:50.545 [2024-07-24 20:34:46.111280] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:06:50.545 [2024-07-24 20:34:46.111360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500401 ] 00:06:50.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.803 [2024-07-24 20:34:46.172074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.803 [2024-07-24 20:34:46.287579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.735 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.735 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:51.735 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:51.991 Nvme0n1 00:06:52.248 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:52.248 [ 00:06:52.248 { 00:06:52.248 "name": "Nvme0n1", 00:06:52.248 "aliases": [ 00:06:52.248 "40366384-8136-4734-90eb-58904488eb90" 00:06:52.248 ], 00:06:52.248 "product_name": "NVMe disk", 00:06:52.248 "block_size": 4096, 00:06:52.248 "num_blocks": 38912, 00:06:52.248 "uuid": "40366384-8136-4734-90eb-58904488eb90", 00:06:52.248 "assigned_rate_limits": { 00:06:52.248 "rw_ios_per_sec": 0, 00:06:52.248 "rw_mbytes_per_sec": 0, 00:06:52.248 "r_mbytes_per_sec": 0, 00:06:52.248 "w_mbytes_per_sec": 0 00:06:52.248 }, 00:06:52.248 "claimed": false, 00:06:52.248 "zoned": false, 00:06:52.248 
"supported_io_types": { 00:06:52.248 "read": true, 00:06:52.248 "write": true, 00:06:52.248 "unmap": true, 00:06:52.248 "flush": true, 00:06:52.248 "reset": true, 00:06:52.248 "nvme_admin": true, 00:06:52.248 "nvme_io": true, 00:06:52.248 "nvme_io_md": false, 00:06:52.248 "write_zeroes": true, 00:06:52.248 "zcopy": false, 00:06:52.248 "get_zone_info": false, 00:06:52.248 "zone_management": false, 00:06:52.248 "zone_append": false, 00:06:52.248 "compare": true, 00:06:52.248 "compare_and_write": true, 00:06:52.248 "abort": true, 00:06:52.248 "seek_hole": false, 00:06:52.248 "seek_data": false, 00:06:52.249 "copy": true, 00:06:52.249 "nvme_iov_md": false 00:06:52.249 }, 00:06:52.249 "memory_domains": [ 00:06:52.249 { 00:06:52.249 "dma_device_id": "system", 00:06:52.249 "dma_device_type": 1 00:06:52.249 } 00:06:52.249 ], 00:06:52.249 "driver_specific": { 00:06:52.249 "nvme": [ 00:06:52.249 { 00:06:52.249 "trid": { 00:06:52.249 "trtype": "TCP", 00:06:52.249 "adrfam": "IPv4", 00:06:52.249 "traddr": "10.0.0.2", 00:06:52.249 "trsvcid": "4420", 00:06:52.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:52.249 }, 00:06:52.249 "ctrlr_data": { 00:06:52.249 "cntlid": 1, 00:06:52.249 "vendor_id": "0x8086", 00:06:52.249 "model_number": "SPDK bdev Controller", 00:06:52.249 "serial_number": "SPDK0", 00:06:52.249 "firmware_revision": "24.09", 00:06:52.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:52.249 "oacs": { 00:06:52.249 "security": 0, 00:06:52.249 "format": 0, 00:06:52.249 "firmware": 0, 00:06:52.249 "ns_manage": 0 00:06:52.249 }, 00:06:52.249 "multi_ctrlr": true, 00:06:52.249 "ana_reporting": false 00:06:52.249 }, 00:06:52.249 "vs": { 00:06:52.249 "nvme_version": "1.3" 00:06:52.249 }, 00:06:52.249 "ns_data": { 00:06:52.249 "id": 1, 00:06:52.249 "can_share": true 00:06:52.249 } 00:06:52.249 } 00:06:52.249 ], 00:06:52.249 "mp_policy": "active_passive" 00:06:52.249 } 00:06:52.249 } 00:06:52.249 ] 00:06:52.249 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1500547 00:06:52.249 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:52.249 20:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:52.506 Running I/O for 10 seconds... 00:06:53.436 Latency(us) 00:06:53.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.436 Nvme0n1 : 1.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:06:53.436 =================================================================================================================== 00:06:53.436 Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:06:53.436 00:06:54.371 20:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:06:54.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.371 Nvme0n1 : 2.00 14478.50 56.56 0.00 0.00 0.00 0.00 0.00 00:06:54.371 =================================================================================================================== 00:06:54.371 Total : 14478.50 56.56 0.00 0.00 0.00 0.00 0.00 00:06:54.371 00:06:54.628 true 00:06:54.628 20:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:06:54.628 20:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:54.885 20:34:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:54.885 20:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:54.885 20:34:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1500547 00:06:55.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.450 Nvme0n1 : 3.00 14520.67 56.72 0.00 0.00 0.00 0.00 0.00 00:06:55.450 =================================================================================================================== 00:06:55.450 Total : 14520.67 56.72 0.00 0.00 0.00 0.00 0.00 00:06:55.450 00:06:56.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.382 Nvme0n1 : 4.00 14573.50 56.93 0.00 0.00 0.00 0.00 0.00 00:06:56.382 =================================================================================================================== 00:06:56.382 Total : 14573.50 56.93 0.00 0.00 0.00 0.00 0.00 00:06:56.382 00:06:57.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.752 Nvme0n1 : 5.00 14618.20 57.10 0.00 0.00 0.00 0.00 0.00 00:06:57.752 =================================================================================================================== 00:06:57.752 Total : 14618.20 57.10 0.00 0.00 0.00 0.00 0.00 00:06:57.752 00:06:58.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.683 Nvme0n1 : 6.00 14629.50 57.15 0.00 0.00 0.00 0.00 0.00 00:06:58.683 =================================================================================================================== 00:06:58.683 Total : 14629.50 57.15 0.00 0.00 0.00 0.00 0.00 00:06:58.683 00:06:59.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.616 Nvme0n1 : 7.00 14662.29 57.27 0.00 0.00 0.00 0.00 0.00 00:06:59.616 
=================================================================================================================== 00:06:59.616 Total : 14662.29 57.27 0.00 0.00 0.00 0.00 0.00 00:06:59.616 00:07:00.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.548 Nvme0n1 : 8.00 14702.75 57.43 0.00 0.00 0.00 0.00 0.00 00:07:00.548 =================================================================================================================== 00:07:00.548 Total : 14702.75 57.43 0.00 0.00 0.00 0.00 0.00 00:07:00.548 00:07:01.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.481 Nvme0n1 : 9.00 14713.22 57.47 0.00 0.00 0.00 0.00 0.00 00:07:01.481 =================================================================================================================== 00:07:01.481 Total : 14713.22 57.47 0.00 0.00 0.00 0.00 0.00 00:07:01.481 00:07:02.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.420 Nvme0n1 : 10.00 14728.10 57.53 0.00 0.00 0.00 0.00 0.00 00:07:02.420 =================================================================================================================== 00:07:02.420 Total : 14728.10 57.53 0.00 0.00 0.00 0.00 0.00 00:07:02.420 00:07:02.420 00:07:02.420 Latency(us) 00:07:02.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.420 Nvme0n1 : 10.01 14729.20 57.54 0.00 0.00 8685.22 5218.61 17476.27 00:07:02.420 =================================================================================================================== 00:07:02.420 Total : 14729.20 57.54 0.00 0.00 8685.22 5218.61 17476.27 00:07:02.420 0 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1500401 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 1500401 ']' 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1500401 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.420 20:34:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1500401 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1500401' 00:07:02.724 killing process with pid 1500401 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1500401 00:07:02.724 Received shutdown signal, test time was about 10.000000 seconds 00:07:02.724 00:07:02.724 Latency(us) 00:07:02.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.724 =================================================================================================================== 00:07:02.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1500401 00:07:02.724 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.981 20:34:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:03.547 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:03.547 20:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:03.547 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:03.547 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:03.547 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:03.805 [2024-07-24 20:34:59.288040] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.805 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:03.806 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:04.063 request: 00:07:04.063 { 00:07:04.063 "uuid": "1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b", 00:07:04.063 "method": "bdev_lvol_get_lvstores", 00:07:04.063 "req_id": 1 00:07:04.063 } 00:07:04.063 Got JSON-RPC error response 00:07:04.063 response: 00:07:04.063 { 00:07:04.063 "code": -19, 00:07:04.063 "message": "No such device" 00:07:04.063 } 00:07:04.064 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:04.064 20:34:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.064 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.064 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.064 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:04.629 aio_bdev 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 40366384-8136-4734-90eb-58904488eb90 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=40366384-8136-4734-90eb-58904488eb90 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:04.629 20:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:04.887 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 40366384-8136-4734-90eb-58904488eb90 -t 2000 00:07:05.145 [ 00:07:05.145 { 
00:07:05.145 "name": "40366384-8136-4734-90eb-58904488eb90", 00:07:05.145 "aliases": [ 00:07:05.145 "lvs/lvol" 00:07:05.145 ], 00:07:05.145 "product_name": "Logical Volume", 00:07:05.145 "block_size": 4096, 00:07:05.145 "num_blocks": 38912, 00:07:05.145 "uuid": "40366384-8136-4734-90eb-58904488eb90", 00:07:05.145 "assigned_rate_limits": { 00:07:05.145 "rw_ios_per_sec": 0, 00:07:05.145 "rw_mbytes_per_sec": 0, 00:07:05.145 "r_mbytes_per_sec": 0, 00:07:05.145 "w_mbytes_per_sec": 0 00:07:05.145 }, 00:07:05.145 "claimed": false, 00:07:05.145 "zoned": false, 00:07:05.145 "supported_io_types": { 00:07:05.145 "read": true, 00:07:05.145 "write": true, 00:07:05.145 "unmap": true, 00:07:05.145 "flush": false, 00:07:05.145 "reset": true, 00:07:05.145 "nvme_admin": false, 00:07:05.145 "nvme_io": false, 00:07:05.145 "nvme_io_md": false, 00:07:05.145 "write_zeroes": true, 00:07:05.145 "zcopy": false, 00:07:05.145 "get_zone_info": false, 00:07:05.145 "zone_management": false, 00:07:05.145 "zone_append": false, 00:07:05.145 "compare": false, 00:07:05.145 "compare_and_write": false, 00:07:05.145 "abort": false, 00:07:05.145 "seek_hole": true, 00:07:05.145 "seek_data": true, 00:07:05.145 "copy": false, 00:07:05.145 "nvme_iov_md": false 00:07:05.145 }, 00:07:05.145 "driver_specific": { 00:07:05.145 "lvol": { 00:07:05.145 "lvol_store_uuid": "1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b", 00:07:05.145 "base_bdev": "aio_bdev", 00:07:05.145 "thin_provision": false, 00:07:05.145 "num_allocated_clusters": 38, 00:07:05.145 "snapshot": false, 00:07:05.145 "clone": false, 00:07:05.145 "esnap_clone": false 00:07:05.145 } 00:07:05.145 } 00:07:05.145 } 00:07:05.145 ] 00:07:05.145 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:05.145 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:05.145 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:05.403 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:05.403 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:05.403 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:05.403 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:05.403 20:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40366384-8136-4734-90eb-58904488eb90 00:07:05.969 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cb59fa4-4a45-4427-8aac-9cdedf4d4b3b 00:07:05.969 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.535 00:07:06.535 real 0m18.323s 00:07:06.535 user 0m17.970s 00:07:06.535 sys 0m1.924s 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.535 20:35:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:06.535 ************************************ 00:07:06.535 END TEST lvs_grow_clean 00:07:06.535 ************************************ 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.535 ************************************ 00:07:06.535 START TEST lvs_grow_dirty 00:07:06.535 ************************************ 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.535 20:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.793 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.793 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.051 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f923708-91ef-415d-a596-c9f95fb55de9 00:07:07.051 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:07.051 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.309 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.309 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.309 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
6f923708-91ef-415d-a596-c9f95fb55de9 lvol 150 00:07:07.566 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:07.566 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.567 20:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.825 [2024-07-24 20:35:03.175484] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.825 [2024-07-24 20:35:03.175571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.825 true 00:07:07.825 20:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:07.825 20:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.081 20:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.081 20:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.339 20:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:08.597 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.855 [2024-07-24 20:35:04.238726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.855 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1502599 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1502599 /var/tmp/bdevperf.sock 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1502599 ']' 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.114 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.114 [2024-07-24 20:35:04.539330] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:09.114 [2024-07-24 20:35:04.539419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502599 ] 00:07:09.114 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.114 [2024-07-24 20:35:04.600952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.372 [2024-07-24 20:35:04.718944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.372 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.372 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:09.372 20:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.629 Nvme0n1 00:07:09.629 20:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.887 [ 00:07:09.887 { 00:07:09.887 "name": "Nvme0n1", 00:07:09.887 "aliases": [ 
00:07:09.887 "18c1a045-cb05-46c6-b66b-bb9aea22677e" 00:07:09.887 ], 00:07:09.887 "product_name": "NVMe disk", 00:07:09.887 "block_size": 4096, 00:07:09.887 "num_blocks": 38912, 00:07:09.887 "uuid": "18c1a045-cb05-46c6-b66b-bb9aea22677e", 00:07:09.887 "assigned_rate_limits": { 00:07:09.887 "rw_ios_per_sec": 0, 00:07:09.887 "rw_mbytes_per_sec": 0, 00:07:09.887 "r_mbytes_per_sec": 0, 00:07:09.887 "w_mbytes_per_sec": 0 00:07:09.887 }, 00:07:09.887 "claimed": false, 00:07:09.887 "zoned": false, 00:07:09.887 "supported_io_types": { 00:07:09.887 "read": true, 00:07:09.887 "write": true, 00:07:09.887 "unmap": true, 00:07:09.887 "flush": true, 00:07:09.887 "reset": true, 00:07:09.887 "nvme_admin": true, 00:07:09.887 "nvme_io": true, 00:07:09.887 "nvme_io_md": false, 00:07:09.887 "write_zeroes": true, 00:07:09.887 "zcopy": false, 00:07:09.887 "get_zone_info": false, 00:07:09.887 "zone_management": false, 00:07:09.887 "zone_append": false, 00:07:09.887 "compare": true, 00:07:09.887 "compare_and_write": true, 00:07:09.887 "abort": true, 00:07:09.887 "seek_hole": false, 00:07:09.887 "seek_data": false, 00:07:09.887 "copy": true, 00:07:09.887 "nvme_iov_md": false 00:07:09.887 }, 00:07:09.887 "memory_domains": [ 00:07:09.887 { 00:07:09.887 "dma_device_id": "system", 00:07:09.887 "dma_device_type": 1 00:07:09.887 } 00:07:09.887 ], 00:07:09.887 "driver_specific": { 00:07:09.887 "nvme": [ 00:07:09.887 { 00:07:09.887 "trid": { 00:07:09.887 "trtype": "TCP", 00:07:09.887 "adrfam": "IPv4", 00:07:09.887 "traddr": "10.0.0.2", 00:07:09.887 "trsvcid": "4420", 00:07:09.887 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.887 }, 00:07:09.887 "ctrlr_data": { 00:07:09.887 "cntlid": 1, 00:07:09.887 "vendor_id": "0x8086", 00:07:09.887 "model_number": "SPDK bdev Controller", 00:07:09.887 "serial_number": "SPDK0", 00:07:09.887 "firmware_revision": "24.09", 00:07:09.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.887 "oacs": { 00:07:09.887 "security": 0, 00:07:09.887 "format": 0, 00:07:09.887 
"firmware": 0, 00:07:09.887 "ns_manage": 0 00:07:09.887 }, 00:07:09.887 "multi_ctrlr": true, 00:07:09.887 "ana_reporting": false 00:07:09.887 }, 00:07:09.887 "vs": { 00:07:09.887 "nvme_version": "1.3" 00:07:09.887 }, 00:07:09.887 "ns_data": { 00:07:09.887 "id": 1, 00:07:09.887 "can_share": true 00:07:09.887 } 00:07:09.887 } 00:07:09.887 ], 00:07:09.887 "mp_policy": "active_passive" 00:07:09.887 } 00:07:09.887 } 00:07:09.887 ] 00:07:09.887 20:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1502734 00:07:09.887 20:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.887 20:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:10.145 Running I/O for 10 seconds... 00:07:11.077 Latency(us) 00:07:11.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.077 Nvme0n1 : 1.00 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:07:11.077 =================================================================================================================== 00:07:11.077 Total : 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:07:11.077 00:07:12.010 20:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:12.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.010 Nvme0n1 : 2.00 14525.50 56.74 0.00 0.00 0.00 0.00 0.00 00:07:12.010 =================================================================================================================== 00:07:12.010 Total : 14525.50 56.74 
0.00 0.00 0.00 0.00 0.00 00:07:12.010 00:07:12.268 true 00:07:12.268 20:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:12.268 20:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.526 20:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:12.526 20:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:12.526 20:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1502734 00:07:13.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.091 Nvme0n1 : 3.00 14615.33 57.09 0.00 0.00 0.00 0.00 0.00 00:07:13.091 =================================================================================================================== 00:07:13.092 Total : 14615.33 57.09 0.00 0.00 0.00 0.00 0.00 00:07:13.092 00:07:14.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.025 Nvme0n1 : 4.00 14708.75 57.46 0.00 0.00 0.00 0.00 0.00 00:07:14.025 =================================================================================================================== 00:07:14.025 Total : 14708.75 57.46 0.00 0.00 0.00 0.00 0.00 00:07:14.025 00:07:15.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.396 Nvme0n1 : 5.00 14764.20 57.67 0.00 0.00 0.00 0.00 0.00 00:07:15.396 =================================================================================================================== 00:07:15.397 Total : 14764.20 57.67 0.00 0.00 0.00 0.00 0.00 00:07:15.397 00:07:16.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:16.329 Nvme0n1 : 6.00 14801.17 57.82 0.00 0.00 0.00 0.00 0.00 00:07:16.329 =================================================================================================================== 00:07:16.329 Total : 14801.17 57.82 0.00 0.00 0.00 0.00 0.00 00:07:16.329 00:07:17.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.261 Nvme0n1 : 7.00 14845.71 57.99 0.00 0.00 0.00 0.00 0.00 00:07:17.261 =================================================================================================================== 00:07:17.261 Total : 14845.71 57.99 0.00 0.00 0.00 0.00 0.00 00:07:17.261 00:07:18.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.194 Nvme0n1 : 8.00 14863.25 58.06 0.00 0.00 0.00 0.00 0.00 00:07:18.194 =================================================================================================================== 00:07:18.194 Total : 14863.25 58.06 0.00 0.00 0.00 0.00 0.00 00:07:18.194 00:07:19.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.127 Nvme0n1 : 9.00 14891.00 58.17 0.00 0.00 0.00 0.00 0.00 00:07:19.127 =================================================================================================================== 00:07:19.127 Total : 14891.00 58.17 0.00 0.00 0.00 0.00 0.00 00:07:19.127 00:07:20.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.061 Nvme0n1 : 10.00 14921.30 58.29 0.00 0.00 0.00 0.00 0.00 00:07:20.061 =================================================================================================================== 00:07:20.061 Total : 14921.30 58.29 0.00 0.00 0.00 0.00 0.00 00:07:20.061 00:07:20.061 00:07:20.061 Latency(us) 00:07:20.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.061 Nvme0n1 : 10.00 14919.56 58.28 0.00 0.00 8573.43 
3301.07 16699.54 00:07:20.061 =================================================================================================================== 00:07:20.062 Total : 14919.56 58.28 0.00 0.00 8573.43 3301.07 16699.54 00:07:20.062 0 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1502599 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1502599 ']' 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1502599 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1502599 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1502599' 00:07:20.062 killing process with pid 1502599 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1502599 00:07:20.062 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.062 00:07:20.062 Latency(us) 00:07:20.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.062 =================================================================================================================== 00:07:20.062 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.062 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1502599 00:07:20.627 20:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.884 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.141 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:21.141 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1499955 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1499955 00:07:21.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1499955 Killed "${NVMF_APP[@]}" "$@" 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1504072 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1504072 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1504072 ']' 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.399 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.399 [2024-07-24 20:35:16.815593] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:07:21.399 [2024-07-24 20:35:16.815705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.399 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.399 [2024-07-24 20:35:16.887102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.657 [2024-07-24 20:35:17.007259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.657 [2024-07-24 20:35:17.007321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.657 [2024-07-24 20:35:17.007337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.657 [2024-07-24 20:35:17.007351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.657 [2024-07-24 20:35:17.007362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.657 [2024-07-24 20:35:17.007394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.657 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.916 [2024-07-24 20:35:17.416666] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:21.916 [2024-07-24 20:35:17.416808] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:21.916 [2024-07-24 20:35:17.416866] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18c1a045-cb05-46c6-b66b-bb9aea22677e 
00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:21.916 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.174 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18c1a045-cb05-46c6-b66b-bb9aea22677e -t 2000 00:07:22.431 [ 00:07:22.431 { 00:07:22.431 "name": "18c1a045-cb05-46c6-b66b-bb9aea22677e", 00:07:22.431 "aliases": [ 00:07:22.431 "lvs/lvol" 00:07:22.431 ], 00:07:22.431 "product_name": "Logical Volume", 00:07:22.431 "block_size": 4096, 00:07:22.431 "num_blocks": 38912, 00:07:22.431 "uuid": "18c1a045-cb05-46c6-b66b-bb9aea22677e", 00:07:22.431 "assigned_rate_limits": { 00:07:22.431 "rw_ios_per_sec": 0, 00:07:22.431 "rw_mbytes_per_sec": 0, 00:07:22.431 "r_mbytes_per_sec": 0, 00:07:22.431 "w_mbytes_per_sec": 0 00:07:22.431 }, 00:07:22.431 "claimed": false, 00:07:22.431 "zoned": false, 00:07:22.431 "supported_io_types": { 00:07:22.431 "read": true, 00:07:22.431 "write": true, 00:07:22.431 "unmap": true, 00:07:22.431 "flush": false, 00:07:22.431 "reset": true, 00:07:22.432 "nvme_admin": false, 00:07:22.432 "nvme_io": false, 00:07:22.432 "nvme_io_md": false, 00:07:22.432 "write_zeroes": true, 00:07:22.432 "zcopy": false, 00:07:22.432 "get_zone_info": false, 00:07:22.432 "zone_management": false, 00:07:22.432 "zone_append": 
false, 00:07:22.432 "compare": false, 00:07:22.432 "compare_and_write": false, 00:07:22.432 "abort": false, 00:07:22.432 "seek_hole": true, 00:07:22.432 "seek_data": true, 00:07:22.432 "copy": false, 00:07:22.432 "nvme_iov_md": false 00:07:22.432 }, 00:07:22.432 "driver_specific": { 00:07:22.432 "lvol": { 00:07:22.432 "lvol_store_uuid": "6f923708-91ef-415d-a596-c9f95fb55de9", 00:07:22.432 "base_bdev": "aio_bdev", 00:07:22.432 "thin_provision": false, 00:07:22.432 "num_allocated_clusters": 38, 00:07:22.432 "snapshot": false, 00:07:22.432 "clone": false, 00:07:22.432 "esnap_clone": false 00:07:22.432 } 00:07:22.432 } 00:07:22.432 } 00:07:22.432 ] 00:07:22.432 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:22.432 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:22.432 20:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:22.689 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:22.689 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:22.689 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:22.946 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:22.946 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:23.204 [2024-07-24 20:35:18.741823] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.524 20:35:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.524 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:23.524 request: 00:07:23.524 { 00:07:23.524 "uuid": "6f923708-91ef-415d-a596-c9f95fb55de9", 00:07:23.524 "method": "bdev_lvol_get_lvstores", 00:07:23.524 "req_id": 1 00:07:23.524 } 00:07:23.524 Got JSON-RPC error response 00:07:23.524 response: 00:07:23.524 { 00:07:23.524 "code": -19, 00:07:23.524 "message": "No such device" 00:07:23.524 } 00:07:23.524 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:23.524 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.524 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.524 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.524 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.090 aio_bdev 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:24.090 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:24.346 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18c1a045-cb05-46c6-b66b-bb9aea22677e -t 2000 00:07:24.604 [ 00:07:24.604 { 00:07:24.604 "name": "18c1a045-cb05-46c6-b66b-bb9aea22677e", 00:07:24.604 "aliases": [ 00:07:24.604 "lvs/lvol" 00:07:24.604 ], 00:07:24.604 "product_name": "Logical Volume", 00:07:24.604 "block_size": 4096, 00:07:24.604 "num_blocks": 38912, 00:07:24.604 "uuid": "18c1a045-cb05-46c6-b66b-bb9aea22677e", 00:07:24.604 "assigned_rate_limits": { 00:07:24.604 "rw_ios_per_sec": 0, 00:07:24.604 "rw_mbytes_per_sec": 0, 00:07:24.604 "r_mbytes_per_sec": 0, 00:07:24.604 "w_mbytes_per_sec": 0 00:07:24.604 }, 00:07:24.604 "claimed": false, 00:07:24.604 "zoned": false, 00:07:24.604 "supported_io_types": { 00:07:24.604 "read": true, 00:07:24.604 "write": true, 00:07:24.604 "unmap": true, 00:07:24.604 "flush": false, 00:07:24.604 "reset": true, 00:07:24.604 "nvme_admin": false, 00:07:24.604 "nvme_io": false, 00:07:24.604 "nvme_io_md": false, 00:07:24.604 "write_zeroes": true, 00:07:24.604 "zcopy": false, 00:07:24.604 "get_zone_info": false, 00:07:24.604 "zone_management": false, 00:07:24.604 "zone_append": false, 00:07:24.604 "compare": false, 00:07:24.604 "compare_and_write": false, 
00:07:24.604 "abort": false, 00:07:24.604 "seek_hole": true, 00:07:24.604 "seek_data": true, 00:07:24.604 "copy": false, 00:07:24.604 "nvme_iov_md": false 00:07:24.604 }, 00:07:24.604 "driver_specific": { 00:07:24.604 "lvol": { 00:07:24.604 "lvol_store_uuid": "6f923708-91ef-415d-a596-c9f95fb55de9", 00:07:24.604 "base_bdev": "aio_bdev", 00:07:24.604 "thin_provision": false, 00:07:24.604 "num_allocated_clusters": 38, 00:07:24.604 "snapshot": false, 00:07:24.604 "clone": false, 00:07:24.604 "esnap_clone": false 00:07:24.604 } 00:07:24.604 } 00:07:24.604 } 00:07:24.604 ] 00:07:24.604 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:24.604 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:24.604 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:24.860 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:24.860 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:24.860 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:25.117 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:25.117 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18c1a045-cb05-46c6-b66b-bb9aea22677e 00:07:25.374 20:35:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f923708-91ef-415d-a596-c9f95fb55de9 00:07:25.631 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.888 00:07:25.888 real 0m19.469s 00:07:25.888 user 0m49.427s 00:07:25.888 sys 0m4.895s 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.888 ************************************ 00:07:25.888 END TEST lvs_grow_dirty 00:07:25.888 ************************************ 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:25.888 nvmf_trace.0 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.888 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.888 rmmod nvme_tcp 00:07:25.888 rmmod nvme_fabrics 00:07:25.888 rmmod nvme_keyring 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1504072 ']' 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1504072 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1504072 ']' 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1504072 
00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1504072 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1504072' 00:07:26.145 killing process with pid 1504072 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1504072 00:07:26.145 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1504072 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.403 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.312 20:35:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.312 00:07:28.312 real 0m43.194s 00:07:28.312 user 1m13.448s 00:07:28.312 sys 0m8.727s 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.312 ************************************ 00:07:28.312 END TEST nvmf_lvs_grow 00:07:28.312 ************************************ 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.312 ************************************ 00:07:28.312 START TEST nvmf_bdev_io_wait 00:07:28.312 ************************************ 00:07:28.312 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:28.568 * Looking for test storage... 
00:07:28.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:28.568 20:35:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.568 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.569 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:30.467 20:35:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.467 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:30.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:30.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.468 20:35:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:30.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:30.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:30.468 20:35:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:30.468 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.468 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.468 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.468 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:30.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:07:30.468 00:07:30.468 --- 10.0.0.2 ping statistics --- 00:07:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.468 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:30.468 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:07:30.726 00:07:30.726 --- 10.0.0.1 ping statistics --- 00:07:30.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.726 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1506596 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1506596 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1506596 ']' 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.726 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 [2024-07-24 20:35:26.116712] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:30.726 [2024-07-24 20:35:26.116796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.726 [2024-07-24 20:35:26.189419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.984 [2024-07-24 20:35:26.312137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:30.984 [2024-07-24 20:35:26.312189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.984 [2024-07-24 20:35:26.312206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.984 [2024-07-24 20:35:26.312219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.984 [2024-07-24 20:35:26.312231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.984 [2024-07-24 20:35:26.316267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.984 [2024-07-24 20:35:26.316318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.984 [2024-07-24 20:35:26.316365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.984 [2024-07-24 20:35:26.316370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.984 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 
20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 [2024-07-24 20:35:26.453040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 Malloc0 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.985 
20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.985 [2024-07-24 20:35:26.515564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1506744 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1506746 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1506748 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:30.985 { 00:07:30.985 "params": { 00:07:30.985 "name": "Nvme$subsystem", 00:07:30.985 "trtype": "$TEST_TRANSPORT", 00:07:30.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.985 "adrfam": "ipv4", 00:07:30.985 "trsvcid": "$NVMF_PORT", 00:07:30.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.985 "hdgst": ${hdgst:-false}, 00:07:30.985 "ddgst": ${ddgst:-false} 00:07:30.985 }, 00:07:30.985 "method": "bdev_nvme_attach_controller" 00:07:30.985 } 00:07:30.985 EOF 00:07:30.985 )") 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:30.985 20:35:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1506750 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:30.985 { 00:07:30.985 "params": { 00:07:30.985 "name": "Nvme$subsystem", 00:07:30.985 "trtype": "$TEST_TRANSPORT", 00:07:30.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.985 "adrfam": "ipv4", 00:07:30.985 "trsvcid": "$NVMF_PORT", 00:07:30.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.985 "hdgst": ${hdgst:-false}, 00:07:30.985 "ddgst": ${ddgst:-false} 00:07:30.985 }, 00:07:30.985 "method": "bdev_nvme_attach_controller" 00:07:30.985 } 00:07:30.985 EOF 00:07:30.985 )") 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:30.985 { 00:07:30.985 "params": { 00:07:30.985 "name": "Nvme$subsystem", 00:07:30.985 "trtype": "$TEST_TRANSPORT", 00:07:30.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.985 "adrfam": "ipv4", 00:07:30.985 "trsvcid": "$NVMF_PORT", 00:07:30.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.985 "hdgst": ${hdgst:-false}, 00:07:30.985 "ddgst": ${ddgst:-false} 00:07:30.985 }, 00:07:30.985 "method": "bdev_nvme_attach_controller" 00:07:30.985 } 00:07:30.985 EOF 00:07:30.985 )") 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:30.985 { 00:07:30.985 "params": { 00:07:30.985 "name": "Nvme$subsystem", 00:07:30.985 "trtype": "$TEST_TRANSPORT", 00:07:30.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.985 "adrfam": "ipv4", 00:07:30.985 "trsvcid": "$NVMF_PORT", 00:07:30.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.985 "hdgst": ${hdgst:-false}, 00:07:30.985 "ddgst": ${ddgst:-false} 00:07:30.985 }, 00:07:30.985 "method": "bdev_nvme_attach_controller" 00:07:30.985 } 00:07:30.985 EOF 00:07:30.985 )") 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 1506744 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:30.985 "params": { 00:07:30.985 "name": "Nvme1", 00:07:30.985 "trtype": "tcp", 00:07:30.985 "traddr": "10.0.0.2", 00:07:30.985 "adrfam": "ipv4", 00:07:30.985 "trsvcid": "4420", 00:07:30.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.985 "hdgst": false, 00:07:30.985 "ddgst": false 00:07:30.985 }, 00:07:30.985 "method": "bdev_nvme_attach_controller" 00:07:30.985 }' 00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:30.985 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:30.986 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:30.986 "params": { 00:07:30.986 "name": "Nvme1", 00:07:30.986 "trtype": "tcp", 00:07:30.986 "traddr": "10.0.0.2", 00:07:30.986 "adrfam": "ipv4", 00:07:30.986 "trsvcid": "4420", 00:07:30.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.986 "hdgst": false, 00:07:30.986 "ddgst": false 00:07:30.986 }, 00:07:30.986 "method": "bdev_nvme_attach_controller" 00:07:30.986 }' 00:07:30.986 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:30.986 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:30.986 "params": { 00:07:30.986 "name": "Nvme1", 00:07:30.986 "trtype": "tcp", 00:07:30.986 "traddr": "10.0.0.2", 00:07:30.986 "adrfam": "ipv4", 00:07:30.986 "trsvcid": "4420", 00:07:30.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.986 "hdgst": false, 00:07:30.986 "ddgst": false 00:07:30.986 }, 00:07:30.986 "method": "bdev_nvme_attach_controller" 00:07:30.986 }' 00:07:30.986 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:30.986 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:30.986 "params": { 00:07:30.986 "name": "Nvme1", 00:07:30.986 "trtype": "tcp", 00:07:30.986 "traddr": "10.0.0.2", 00:07:30.986 "adrfam": "ipv4", 00:07:30.986 "trsvcid": "4420", 00:07:30.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.986 "hdgst": false, 00:07:30.986 "ddgst": false 00:07:30.986 }, 00:07:30.986 "method": "bdev_nvme_attach_controller" 00:07:30.986 }' 00:07:31.243 [2024-07-24 20:35:26.564200] Starting SPDK v24.09-pre git sha1 
2ce15115b / DPDK 24.03.0 initialization... 00:07:31.243 [2024-07-24 20:35:26.564200] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:31.243 [2024-07-24 20:35:26.564200] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:31.243 [2024-07-24 20:35:26.564201] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:31.243 [2024-07-24 20:35:26.564312] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:31.243 [2024-07-24 20:35:26.564313] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:31.243 [2024-07-24 20:35:26.564313] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:31.243 [2024-07-24 20:35:26.564316] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:31.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.243 [2024-07-24 20:35:26.734018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.500 [2024-07-24 20:35:26.832518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.500 [2024-07-24 20:35:26.835839] app.c:
909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.501 [2024-07-24 20:35:26.935504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:31.501 [2024-07-24 20:35:26.936997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.501 [2024-07-24 20:35:27.012929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.501 [2024-07-24 20:35:27.038916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:31.758 [2024-07-24 20:35:27.107508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:31.758 Running I/O for 1 seconds... 00:07:31.758 Running I/O for 1 seconds... 00:07:31.758 Running I/O for 1 seconds... 00:07:32.016 Running I/O for 1 seconds... 00:07:32.949 00:07:32.949 Latency(us) 00:07:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.949 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:32.949 Nvme1n1 : 1.00 191012.00 746.14 0.00 0.00 667.46 280.65 904.15 00:07:32.949 =================================================================================================================== 00:07:32.949 Total : 191012.00 746.14 0.00 0.00 667.46 280.65 904.15 00:07:32.949 00:07:32.949 Latency(us) 00:07:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.949 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:32.949 Nvme1n1 : 1.02 7501.74 29.30 0.00 0.00 16954.43 8786.68 28738.75 00:07:32.949 =================================================================================================================== 00:07:32.949 Total : 7501.74 29.30 0.00 0.00 16954.43 8786.68 28738.75 00:07:32.949 00:07:32.949 Latency(us) 00:07:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.949 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 
00:07:32.949 Nvme1n1 : 1.01 9230.48 36.06 0.00 0.00 13797.84 8980.86 25243.50 00:07:32.949 =================================================================================================================== 00:07:32.949 Total : 9230.48 36.06 0.00 0.00 13797.84 8980.86 25243.50 00:07:32.949 00:07:32.949 Latency(us) 00:07:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.949 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:32.949 Nvme1n1 : 1.00 7891.06 30.82 0.00 0.00 16173.20 4757.43 41360.50 00:07:32.949 =================================================================================================================== 00:07:32.949 Total : 7891.06 30.82 0.00 0.00 16173.20 4757.43 41360.50 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1506746 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1506748 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1506750 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.206 20:35:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.206 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.206 rmmod nvme_tcp 00:07:33.463 rmmod nvme_fabrics 00:07:33.463 rmmod nvme_keyring 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1506596 ']' 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1506596 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1506596 ']' 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1506596 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506596 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.463 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:07:33.464 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506596' 00:07:33.464 killing process with pid 1506596 00:07:33.464 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1506596 00:07:33.464 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1506596 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.722 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.627 00:07:35.627 real 0m7.271s 00:07:35.627 user 0m16.990s 00:07:35.627 sys 0m3.558s 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.627 ************************************ 00:07:35.627 END TEST nvmf_bdev_io_wait 00:07:35.627 
************************************ 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.627 20:35:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.886 ************************************ 00:07:35.886 START TEST nvmf_queue_depth 00:07:35.886 ************************************ 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:35.886 * Looking for test storage... 00:07:35.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.886 
20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.886 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.887 20:35:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.887 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:37.786 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.787 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.787 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.787 20:35:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.787 20:35:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.787 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.787 
20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:38.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:38.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms
00:07:38.045 
00:07:38.045 --- 10.0.0.2 ping statistics ---
00:07:38.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:38.045 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms
00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:38.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:07:38.045 00:07:38.045 --- 10.0.0.1 ping statistics --- 00:07:38.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.045 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1508964 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1508964 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1508964 ']' 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.045 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.045 [2024-07-24 20:35:33.499517] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:07:38.045 [2024-07-24 20:35:33.499605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.045 [2024-07-24 20:35:33.566826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.303 [2024-07-24 20:35:33.686813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.303 [2024-07-24 20:35:33.686869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:38.303 [2024-07-24 20:35:33.686896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.303 [2024-07-24 20:35:33.686909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.303 [2024-07-24 20:35:33.686921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.303 [2024-07-24 20:35:33.686951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 [2024-07-24 20:35:34.470166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 Malloc0 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 [2024-07-24 20:35:34.528876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.236 20:35:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1509118 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1509118 /var/tmp/bdevperf.sock 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1509118 ']' 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.236 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.236 [2024-07-24 20:35:34.575679] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:07:39.236 [2024-07-24 20:35:34.575742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509118 ] 00:07:39.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.236 [2024-07-24 20:35:34.637358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.236 [2024-07-24 20:35:34.753407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.494 NVMe0n1 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.494 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.751 Running I/O for 10 seconds... 
00:07:49.713 
00:07:49.713 Latency(us)
00:07:49.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.713 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:07:49.713 Verification LBA range: start 0x0 length 0x4000
00:07:49.713 NVMe0n1 : 10.10 8402.75 32.82 0.00 0.00 121360.57 24369.68 74565.40
00:07:49.713 ===================================================================================================================
00:07:49.713 Total : 8402.75 32.82 0.00 0.00 121360.57 24369.68 74565.40
00:07:49.713 0
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1509118
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1509118 ']'
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1509118
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509118
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509118'
00:07:49.713 killing process with pid 1509118
00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1509118
00:07:49.713 Received shutdown signal, test time was about 10.000000 seconds
00:07:49.713 
00:07:49.713 Latency(us)
00:07:49.713 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.713 =================================================================================================================== 00:07:49.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:49.713 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1509118 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.970 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.970 rmmod nvme_tcp 00:07:49.970 rmmod nvme_fabrics 00:07:50.227 rmmod nvme_keyring 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1508964 ']' 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1508964 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1508964 ']' 
00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1508964 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508964 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508964' 00:07:50.227 killing process with pid 1508964 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1508964 00:07:50.227 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1508964 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:07:50.485 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.012 00:07:53.012 real 0m16.764s 00:07:53.012 user 0m23.499s 00:07:53.012 sys 0m3.079s 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.012 ************************************ 00:07:53.012 END TEST nvmf_queue_depth 00:07:53.012 ************************************ 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.012 20:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.012 ************************************ 00:07:53.012 START TEST nvmf_target_multipath 00:07:53.012 ************************************ 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:53.012 * Looking for test storage... 
00:07:53.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.012 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.013 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:07:54.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:54.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:54.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:54.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.914 20:35:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:07:54.914 00:07:54.914 --- 10.0.0.2 ping statistics --- 00:07:54.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.914 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:54.914 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:07:54.914 00:07:54.914 --- 10.0.0.1 ping statistics --- 00:07:54.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.914 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:54.915 20:35:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:54.915 only one NIC for nvmf test 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.915 rmmod nvme_tcp 00:07:54.915 rmmod nvme_fabrics 00:07:54.915 rmmod nvme_keyring 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.915 20:35:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.915 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:56.879 20:35:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.879 00:07:56.879 real 0m4.347s 00:07:56.879 user 0m0.813s 00:07:56.879 sys 0m1.517s 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 ************************************ 00:07:56.879 END TEST nvmf_target_multipath 00:07:56.879 ************************************ 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.879 
20:35:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 ************************************ 00:07:56.879 START TEST nvmf_zcopy 00:07:56.879 ************************************ 00:07:56.879 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:57.136 * Looking for test storage... 00:07:57.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.137 20:35:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.137 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.034 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.035 20:35:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.035 20:35:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.035 
20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.035 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.293 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.293 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.293 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:59.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:07:59.293 00:07:59.293 --- 10.0.0.2 ping statistics --- 00:07:59.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.293 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:59.293 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:07:59.293 00:07:59.293 --- 10.0.0.1 ping statistics --- 00:07:59.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.293 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:59.293 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1514320 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1514320 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1514320 ']' 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.294 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:59.294 [2024-07-24 20:35:54.706513] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:07:59.294 [2024-07-24 20:35:54.706620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.294 [2024-07-24 20:35:54.775837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.552 [2024-07-24 20:35:54.892900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.552 [2024-07-24 20:35:54.892952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.552 [2024-07-24 20:35:54.892976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.552 [2024-07-24 20:35:54.892990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.552 [2024-07-24 20:35:54.893001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:59.552 [2024-07-24 20:35:54.893031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.115 [2024-07-24 20:35:55.672903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.115 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 [2024-07-24 20:35:55.689065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 malloc0 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.373 20:35:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:00.373 { 00:08:00.373 "params": { 00:08:00.373 "name": "Nvme$subsystem", 00:08:00.373 "trtype": "$TEST_TRANSPORT", 00:08:00.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.373 "adrfam": "ipv4", 00:08:00.373 "trsvcid": "$NVMF_PORT", 00:08:00.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.373 "hdgst": ${hdgst:-false}, 00:08:00.373 "ddgst": ${ddgst:-false} 00:08:00.373 }, 00:08:00.373 "method": "bdev_nvme_attach_controller" 00:08:00.373 } 00:08:00.373 EOF 00:08:00.373 )") 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:00.373 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:00.373 "params": { 00:08:00.373 "name": "Nvme1", 00:08:00.373 "trtype": "tcp", 00:08:00.373 "traddr": "10.0.0.2", 00:08:00.373 "adrfam": "ipv4", 00:08:00.373 "trsvcid": "4420", 00:08:00.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.373 "hdgst": false, 00:08:00.373 "ddgst": false 00:08:00.373 }, 00:08:00.373 "method": "bdev_nvme_attach_controller" 00:08:00.373 }' 00:08:00.373 [2024-07-24 20:35:55.780470] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:08:00.373 [2024-07-24 20:35:55.780575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514429 ] 00:08:00.373 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.373 [2024-07-24 20:35:55.844733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.688 [2024-07-24 20:35:55.965090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.688 Running I/O for 10 seconds... 
00:08:12.883 00:08:12.883 Latency(us) 00:08:12.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.883 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:12.883 Verification LBA range: start 0x0 length 0x1000 00:08:12.883 Nvme1n1 : 10.02 5638.36 44.05 0.00 0.00 22637.43 3786.52 34564.17 00:08:12.883 =================================================================================================================== 00:08:12.883 Total : 5638.36 44.05 0.00 0.00 22637.43 3786.52 34564.17 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1515786 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.883 { 00:08:12.883 "params": { 00:08:12.883 "name": "Nvme$subsystem", 00:08:12.883 "trtype": "$TEST_TRANSPORT", 00:08:12.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.883 "adrfam": "ipv4", 00:08:12.883 "trsvcid": "$NVMF_PORT", 00:08:12.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.883 "hdgst": 
${hdgst:-false}, 00:08:12.883 "ddgst": ${ddgst:-false} 00:08:12.883 }, 00:08:12.883 "method": "bdev_nvme_attach_controller" 00:08:12.883 } 00:08:12.883 EOF 00:08:12.883 )") 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:12.883 [2024-07-24 20:36:06.518473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.518517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:12.883 20:36:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.883 "params": { 00:08:12.883 "name": "Nvme1", 00:08:12.883 "trtype": "tcp", 00:08:12.883 "traddr": "10.0.0.2", 00:08:12.883 "adrfam": "ipv4", 00:08:12.883 "trsvcid": "4420", 00:08:12.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.883 "hdgst": false, 00:08:12.883 "ddgst": false 00:08:12.883 }, 00:08:12.883 "method": "bdev_nvme_attach_controller" 00:08:12.883 }' 00:08:12.883 [2024-07-24 20:36:06.526420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.526445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.534440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.534462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.542458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.542479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.550478] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.550500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.554969] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:08:12.883 [2024-07-24 20:36:06.555030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515786 ] 00:08:12.883 [2024-07-24 20:36:06.558501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.558537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.566538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.566559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.574557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.574578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.582601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.582622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.883 [2024-07-24 20:36:06.590619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.590644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.883 [2024-07-24 20:36:06.598639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:12.883 [2024-07-24 20:36:06.598664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.606659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.606685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.614682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.614707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.619992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.884 [2024-07-24 20:36:06.622705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.622731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.630764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.630803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.638756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.638786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.646771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.646806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.654792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.654818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.662812] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.662838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.670836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.670861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.678863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.678889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.686909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.686946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.694931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.694968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.702928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.702955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.710949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.710975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.718969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.718994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.726993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.727019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.735013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.735038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.740884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.884 [2024-07-24 20:36:06.743036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.743061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.751060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.751086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.759112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.759150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.767137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.767178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.775160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.775211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.783186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.783227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 
20:36:06.791201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.791254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.799225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.799274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.807253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.807308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.815237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.815271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.823310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.823347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.831335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.831371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.839326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.839352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.847337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.847359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.855351] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.855376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.863386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.863412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.871414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.871440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.879430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.879456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.887449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.887474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.895471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.895496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.903491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.903514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.911512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.911551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.919549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.919570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.927567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.927589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.935604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.935625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.943626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.943649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.951631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.951653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.959645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.959665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.967669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.967690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.975694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.975716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.983712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 
[2024-07-24 20:36:06.983733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.991744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.991768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:06.999756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:06.999778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:07.007781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:07.007802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:07.015804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:07.015824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.884 [2024-07-24 20:36:07.023826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.884 [2024-07-24 20:36:07.023847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.031850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.031871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.039870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.039891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.047932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.047957] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 Running I/O for 5 seconds... 00:08:12.885 [2024-07-24 20:36:07.055946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.055972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.064224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.064261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.079330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.079360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.091298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.091326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.103786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.103816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.116313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.116349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.129304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.129335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.142726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.142758] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.155812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.155843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.168810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.168842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.182494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.182523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.194971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.195003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.207095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.207126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.219791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.219824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.233000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.233033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.245741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.245772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:12.885 [2024-07-24 20:36:07.258011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.258043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.270663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.270695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.283151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.283183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.296012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.296044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.309344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.309373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.322523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.322552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.335921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.335954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.348340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.348368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.361914] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.361957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.375036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.375068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.388420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.388449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.401876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.401908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.414812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.414846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.427937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.427968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.441178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.441209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.454727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.454761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.467932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.467972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.481444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.481472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.494173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.494204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.507143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.507174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.520128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.520159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.533329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.533358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.546659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.546691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.560320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.560347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.573719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 
[2024-07-24 20:36:07.573750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.587297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.587325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.600490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.600519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.613416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.613452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.626258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.626303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.639211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.639251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.652497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.652525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.665326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.665353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.678727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.678758] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.691993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.885 [2024-07-24 20:36:07.692025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.885 [2024-07-24 20:36:07.705317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.705344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.718119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.718150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.730948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.730979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.744145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.744176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.756892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.756923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.769620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.769651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.782591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.782622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:12.886 [2024-07-24 20:36:07.795648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.795678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.808715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.808746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.822367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.822394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.835309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.835337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.848033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.848064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.860765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.860803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.873663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.873693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.886679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.886710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.900357] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.900384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.912656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.912687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.925633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.925664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.938809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.938840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.952430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.952458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.965451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.965479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.978602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.978633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:07.991386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:07.991414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.003849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.003879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.016238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.016297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.028460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.028487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.041401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.041429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.054164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.054194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.066864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.066896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.080100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.080131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.092636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.092667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.105652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 
[2024-07-24 20:36:08.105691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.118645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.118676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.131417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.131446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.144576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.144607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.157481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.157509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.170313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.170341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.183033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.183064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.196224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.196278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.209349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.209377] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.222715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.222743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.235065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.235093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.247436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.247464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.259419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.259446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.272084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.272115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.284821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.284852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.297731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.297762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.310525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.310553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:12.886 [2024-07-24 20:36:08.323178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.323209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.335820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.335851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.348764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.348796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.360790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.360821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.373681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.373711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.387059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.886 [2024-07-24 20:36:08.387092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.886 [2024-07-24 20:36:08.400363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.887 [2024-07-24 20:36:08.400392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.887 [2024-07-24 20:36:08.413098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.887 [2024-07-24 20:36:08.413129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.887 [2024-07-24 20:36:08.425813] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.887 [2024-07-24 20:36:08.425843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.887 [2024-07-24 20:36:08.437903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.887 [2024-07-24 20:36:08.437935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.451138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.451169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.463989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.464019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.477619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.477651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.490258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.490303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.503454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.503481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.516344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.516372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.529215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.529255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.541801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.541832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.554324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.554353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.567625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.567656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.580489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.580517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.593668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.593699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.607031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.607062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.620222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.620263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.633649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 
[2024-07-24 20:36:08.633689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.645846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.645878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.658710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.658741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.671642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.671673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.685082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.144 [2024-07-24 20:36:08.685113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.144 [2024-07-24 20:36:08.697888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.145 [2024-07-24 20:36:08.697918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.145 [2024-07-24 20:36:08.711312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.145 [2024-07-24 20:36:08.711340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.724324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.724352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.737180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.737211] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.750474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.750502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.763150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.763181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.776142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.776172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.789231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.789287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.801691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.801722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.814416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.814444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.826924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.826955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.839370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.839398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:13.402 [2024-07-24 20:36:08.851794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.851825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.864534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.864562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.877388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.877416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.890150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.890180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.903228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.903285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.916654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.916685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.929391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.929419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.942350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.942385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.955370] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.955398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.402 [2024-07-24 20:36:08.968059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.402 [2024-07-24 20:36:08.968100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:08.981297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:08.981337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:08.994265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:08.994308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.006901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.006932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.019676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.019707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.032806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.032837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.045791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.045821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.058859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.058890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.071705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.071743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.084689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.084721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.097666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.097696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.110595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.110626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.122890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.122921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.135453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.135481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.148130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.148160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.161394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 
[2024-07-24 20:36:09.161422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.174395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.174423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.187514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.187555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.200468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.200498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.213627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.213658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.660 [2024-07-24 20:36:09.226509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.660 [2024-07-24 20:36:09.226537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.238982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.239013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.251791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.251822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.264478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.264506] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.277132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.277162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.288859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.288887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.300646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.300674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.313314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.313348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.326373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.326401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.338728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.338760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.351665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.351697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.918 [2024-07-24 20:36:09.364436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.364464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:13.918 [2024-07-24 20:36:09.377581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.918 [2024-07-24 20:36:09.377611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.390166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.390197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.402503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.402553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.414350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.414377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.427332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.427359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.440460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.440488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.453519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.453547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.465478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.465505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.919 [2024-07-24 20:36:09.477139] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.919 [2024-07-24 20:36:09.477165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.489904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.489931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.502336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.502363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.514670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.514697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.527285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.527312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.540713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.540740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.551599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.551633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.563699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.563726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.575776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.575802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.588146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.588172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.600976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.601002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.612592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.612620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.624597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.624624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.636897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.636924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.649119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.649146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.661333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.661361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.673771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 
[2024-07-24 20:36:09.673802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.687077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.687108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.700464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.700492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.713465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.713493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.726716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.726747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.177 [2024-07-24 20:36:09.740053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.177 [2024-07-24 20:36:09.740083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.753674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.753705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.766826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.766856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.780285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.780313] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.794021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.794062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.806826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.806856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.820459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.820487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.833891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.833922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.846983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.847013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.859954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.859985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.872570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.872601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.885691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.885722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:14.435 [2024-07-24 20:36:09.898767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.898798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.911386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.911414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.924135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.924166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.936332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.936359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.948760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.948790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.961420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.961447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.435 [2024-07-24 20:36:09.973222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.435 [2024-07-24 20:36:09.973265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.436 [2024-07-24 20:36:09.986492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.436 [2024-07-24 20:36:09.986520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.436 [2024-07-24 20:36:09.999464] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.436 [2024-07-24 20:36:09.999491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.011916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.011948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.029655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.029698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.044261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.044320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.058835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.058871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.072770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.072805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.086625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.086657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.100756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.100787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.694 [2024-07-24 20:36:10.114951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:14.694 [2024-07-24 20:36:10.114984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace" — repeats at roughly 12-13 ms intervals from [2024-07-24 20:36:10.129875] through [2024-07-24 20:36:12.076706] (elapsed timestamps 00:08:14.694 through 00:08:16.760) ...]
00:08:16.760 Latency(us)
00:08:16.760 Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:08:16.760 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:16.760 Nvme1n1            : 5.01        9863.62  77.06  0.00    0.00  12955.02  5509.88  23107.51
00:08:16.760 ===================================================================================================================
00:08:16.760 Total              :             9863.62  77.06  0.00    0.00  12955.02  5509.88  23107.51
[... the error pair resumes at [2024-07-24 20:36:12.083529] and continues at roughly 8 ms intervals through [2024-07-24 20:36:12.219913], where this chunk truncates mid-entry ...]
*ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.219937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.227935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.227960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.235955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.235979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.243978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.244002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.252013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.252046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.260063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.260118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.268086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.268132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.276070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.276096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.284088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 
[2024-07-24 20:36:12.284113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.292109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.292133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.300132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.300156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.308151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.308174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.316234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.316289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.760 [2024-07-24 20:36:12.324259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.760 [2024-07-24 20:36:12.324303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-07-24 20:36:12.332229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-07-24 20:36:12.332266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-07-24 20:36:12.340251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-07-24 20:36:12.340289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-07-24 20:36:12.348271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-07-24 20:36:12.348306] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 [2024-07-24 20:36:12.356309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.018 [2024-07-24 20:36:12.356332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1515786) - No such process 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1515786 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.018 delay0 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.018 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:17.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.018 [2024-07-24 20:36:12.483208] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:25.148 Initializing NVMe Controllers 00:08:25.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:25.148 Initialization complete. Launching workers. 00:08:25.148 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 16056 00:08:25.148 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16216, failed to submit 101 00:08:25.148 success 16137, unsuccess 79, failed 0 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.148 rmmod nvme_tcp 00:08:25.148 rmmod 
nvme_fabrics 00:08:25.148 rmmod nvme_keyring 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1514320 ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1514320 ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514320' 00:08:25.148 killing process with pid 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1514320 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.148 20:36:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.521 00:08:26.521 real 0m29.546s 00:08:26.521 user 0m43.001s 00:08:26.521 sys 0m9.236s 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.521 ************************************ 00:08:26.521 END TEST nvmf_zcopy 00:08:26.521 ************************************ 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.521 ************************************ 00:08:26.521 START TEST nvmf_nmic 00:08:26.521 ************************************ 00:08:26.521 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:26.521 * Looking for test storage... 00:08:26.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:26.521 20:36:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.521 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.071 20:36:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.071 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:29.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:29.072 00:08:29.072 --- 10.0.0.2 ping statistics --- 00:08:29.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.072 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:29.072 00:08:29.072 --- 10.0.0.1 ping statistics --- 00:08:29.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.072 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1519815 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1519815 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1519815 ']' 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.072 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.072 [2024-07-24 20:36:24.326812] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
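The nvmf_tcp_init trace above (nvmf/common.sh@244-268) moves one E810 port into a network namespace as the target side and leaves the other in the root namespace as the initiator side, so a single host can exercise real NIC-to-NIC TCP traffic. A minimal sketch of that sequence, with the interface names, IPs, and port taken verbatim from the log; it needs root and the real cvl_0_* netdevs, so it is only defined as a function here, not executed:

```shell
# Sketch of the two-port topology built by nvmf/common.sh@244-268 above.
# cvl_0_0 becomes the target interface inside the namespace (10.0.0.2),
# cvl_0_1 stays in the root namespace as the initiator (10.0.0.1).
nvmf_tcp_init_sketch() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the target port before the smoke-test pings
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```

After this, nvmf_tgt is launched with `ip netns exec cvl_0_0_ns_spdk` prepended (common.sh@270), which is why its listeners bind on 10.0.0.2.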
00:08:29.072 [2024-07-24 20:36:24.326899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.072 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.072 [2024-07-24 20:36:24.392892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.072 [2024-07-24 20:36:24.505103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.072 [2024-07-24 20:36:24.505158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.072 [2024-07-24 20:36:24.505171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.072 [2024-07-24 20:36:24.505182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.072 [2024-07-24 20:36:24.505191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.072 [2024-07-24 20:36:24.505267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.072 [2024-07-24 20:36:24.505310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.072 [2024-07-24 20:36:24.505884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.072 [2024-07-24 20:36:24.505973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 [2024-07-24 20:36:24.668742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.330 Malloc0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 [2024-07-24 20:36:24.721580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:29.330 test case1: single bdev can't be used in multiple subsystems 
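Test case1 above relies on an expected-failure pattern (target/nmic.sh@28-36): the second `nvmf_subsystem_add_ns` must fail because Malloc0 is already claimed exclusively by cnode1, and a *successful* RPC would fail the test. A standalone sketch of that pattern, with `false` standing in for the rpc_cmd call since no SPDK target is running here:

```shell
# Expected-failure check: record the RPC's status instead of aborting on it,
# then treat success as the real failure.
nmic_status=0
# stand-in for: rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
false || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure was expected."
    exit 1
fi
echo " Adding namespace failed - expected result."
```

The `|| nmic_status=1` form matters under `set -e`: it keeps a failing RPC from killing the script while still capturing the outcome.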
00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 [2024-07-24 20:36:24.745385] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:29.330 [2024-07-24 20:36:24.745415] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:29.330 [2024-07-24 20:36:24.745431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.330 request: 00:08:29.330 { 00:08:29.330 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:29.330 "namespace": { 00:08:29.330 
"bdev_name": "Malloc0", 00:08:29.330 "no_auto_visible": false 00:08:29.330 }, 00:08:29.330 "method": "nvmf_subsystem_add_ns", 00:08:29.330 "req_id": 1 00:08:29.330 } 00:08:29.330 Got JSON-RPC error response 00:08:29.330 response: 00:08:29.330 { 00:08:29.330 "code": -32602, 00:08:29.330 "message": "Invalid parameters" 00:08:29.330 } 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:29.330 Adding namespace failed - expected result. 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:29.330 test case2: host connect to nvmf target in multiple paths 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:29.330 [2024-07-24 20:36:24.753536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.330 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.895 20:36:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:30.459 20:36:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:30.459 20:36:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.459 20:36:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.459 20:36:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:30.459 20:36:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:32.982 20:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:32.982 [global] 00:08:32.982 thread=1 00:08:32.982 invalidate=1 00:08:32.982 rw=write 00:08:32.982 time_based=1 00:08:32.982 runtime=1 00:08:32.982 ioengine=libaio 00:08:32.982 direct=1 00:08:32.982 bs=4096 00:08:32.982 iodepth=1 00:08:32.982 
norandommap=0 00:08:32.982 numjobs=1 00:08:32.982 00:08:32.982 verify_dump=1 00:08:32.982 verify_backlog=512 00:08:32.982 verify_state_save=0 00:08:32.982 do_verify=1 00:08:32.982 verify=crc32c-intel 00:08:32.982 [job0] 00:08:32.982 filename=/dev/nvme0n1 00:08:32.982 Could not set queue depth (nvme0n1) 00:08:32.982 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.982 fio-3.35 00:08:32.982 Starting 1 thread 00:08:33.915 00:08:33.915 job0: (groupid=0, jobs=1): err= 0: pid=1520329: Wed Jul 24 20:36:29 2024 00:08:33.915 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:08:33.915 slat (nsec): min=15833, max=36569, avg=22089.33, stdev=8356.02 00:08:33.915 clat (usec): min=40800, max=42295, avg=41839.38, stdev=385.27 00:08:33.915 lat (usec): min=40836, max=42311, avg=41861.47, stdev=383.39 00:08:33.915 clat percentiles (usec): 00:08:33.915 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:08:33.915 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:08:33.915 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:33.915 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:33.915 | 99.99th=[42206] 00:08:33.915 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:08:33.915 slat (usec): min=6, max=28110, avg=70.42, stdev=1241.65 00:08:33.915 clat (usec): min=155, max=291, avg=179.03, stdev=11.30 00:08:33.915 lat (usec): min=163, max=28368, avg=249.46, stdev=1245.21 00:08:33.915 clat percentiles (usec): 00:08:33.915 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:08:33.915 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:08:33.915 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 194], 00:08:33.915 | 99.00th=[ 204], 99.50th=[ 235], 99.90th=[ 293], 99.95th=[ 293], 00:08:33.915 | 99.99th=[ 293] 00:08:33.915 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:33.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:33.915 lat (usec) : 250=95.68%, 500=0.38% 00:08:33.915 lat (msec) : 50=3.94% 00:08:33.915 cpu : usr=0.50%, sys=0.69%, ctx=536, majf=0, minf=2 00:08:33.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.915 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.915 00:08:33.915 Run status group 0 (all jobs): 00:08:33.915 READ: bw=83.2KiB/s (85.2kB/s), 83.2KiB/s-83.2KiB/s (85.2kB/s-85.2kB/s), io=84.0KiB (86.0kB), run=1009-1009msec 00:08:33.915 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:08:33.915 00:08:33.915 Disk stats (read/write): 00:08:33.915 nvme0n1: ios=44/512, merge=0/0, ticks=1739/83, in_queue=1822, util=98.60% 00:08:33.915 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.173 rmmod nvme_tcp 00:08:34.173 rmmod nvme_fabrics 00:08:34.173 rmmod nvme_keyring 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1519815 ']' 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1519815 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1519815 ']' 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1519815 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1519815 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.173 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.174 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1519815' 00:08:34.174 killing process with pid 1519815 00:08:34.174 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1519815 00:08:34.174 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1519815 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.432 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.961 00:08:36.961 real 0m9.971s 00:08:36.961 user 0m22.480s 00:08:36.961 sys 0m2.319s 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:36.961 ************************************ 00:08:36.961 END TEST nvmf_nmic 00:08:36.961 ************************************ 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.961 20:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.961 ************************************ 00:08:36.961 START TEST nvmf_fio_target 00:08:36.961 ************************************ 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:36.961 * Looking for test storage... 
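The fio run traced earlier (target/nmic.sh@46) invoked `fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v`, and the log echoed the resulting job file section by section. A sketch that reconstructs that job file from the logged values; the /tmp path is illustrative, and the filename assumes the connected namespace showed up as /dev/nvme0n1 as in the log:

```shell
# Rebuild the [global]/[job0] job file printed by fio-wrapper above:
# -i 4096 -> bs, -d 1 -> iodepth, -t write -> rw, -r 1 -> runtime, -v -> verify.
job=/tmp/job0.fio
cat > "$job" <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
```

Running it would be `fio "$job"`, which is what produced the 1009 msec write results in the log.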
00:08:36.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.961 20:36:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.961 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.962 20:36:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.962 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.863 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.864 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.864 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.864 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.864 20:36:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:38.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:38.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:08:38.864
00:08:38.864 --- 10.0.0.2 ping statistics ---
00:08:38.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:38.864 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:38.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:38.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:08:38.864
00:08:38.864 --- 10.0.0.1 ping statistics ---
00:08:38.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:38.864 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1522424
00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1522424 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1522424 ']' 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.864 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:38.864 [2024-07-24 20:36:34.303923] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:08:38.864 [2024-07-24 20:36:34.303997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.864 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.864 [2024-07-24 20:36:34.371562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.123 [2024-07-24 20:36:34.482925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.123 [2024-07-24 20:36:34.482986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
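The nvmftestinit phase logged above builds the TCP test topology: one port of the E810 NIC (cvl_0_0) is moved into a network namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, addresses 10.0.0.2/10.0.0.1 are assigned, TCP port 4420 is opened, and connectivity is verified with ping in both directions. A minimal sketch of that command sequence, reconstructed from this log (not the actual nvmf/common.sh implementation; interface, namespace, and address values are copied from the log output):

```python
# Illustrative reconstruction of the nvmftestinit network setup traced above
# (nvmf/common.sh lines ~229-268). This only generates the command strings;
# actually running them would require root and this machine's cvl_0_* ports.

def nvmf_tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                       ns="cvl_0_0_ns_spdk",
                       initiator_ip="10.0.0.1", target_ip="10.0.0.2",
                       port=4420):
    in_ns = f"ip netns exec {ns}"
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        # target port moves into the namespace; initiator port stays outside
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"{in_ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"{in_ns} ip link set {target_if} up",
        f"{in_ns} ip link set lo up",
        # open the NVMe/TCP listener port toward the initiator interface
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
        # verify connectivity both ways, as the log does
        f"ping -c 1 {target_ip}",
        f"{in_ns} ping -c 1 {initiator_ip}",
    ]

for cmd in nvmf_tcp_init_cmds():
    print(cmd)
```

With this layout, the nvmf_tgt application itself is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), so the target listens on 10.0.0.2 while fio connects from the root namespace.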
00:08:39.123 [2024-07-24 20:36:34.483013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.123 [2024-07-24 20:36:34.483024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.123 [2024-07-24 20:36:34.483033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.123 [2024-07-24 20:36:34.483119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.123 [2024-07-24 20:36:34.483192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.123 [2024-07-24 20:36:34.483263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.123 [2024-07-24 20:36:34.483268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.123 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.380 [2024-07-24 20:36:34.918717] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.380 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:39.945 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:39.945 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.203 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:40.203 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.461 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:40.461 20:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.718 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:40.718 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:40.977 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.235 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:41.235 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.493 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:41.493 20:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.750 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:41.750 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:41.750 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.008 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:42.008 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.265 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:42.265 20:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:42.522 20:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.780 [2024-07-24 20:36:38.282592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.780 20:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:43.037 20:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:43.299 20:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:43.911 20:36:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:46.435 20:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:46.435 [global]
00:08:46.435 thread=1
00:08:46.436 invalidate=1
00:08:46.436 rw=write
00:08:46.436 time_based=1
00:08:46.436 runtime=1
00:08:46.436 ioengine=libaio
00:08:46.436 direct=1
00:08:46.436 bs=4096
00:08:46.436 iodepth=1
00:08:46.436 norandommap=0
00:08:46.436 numjobs=1
00:08:46.436
00:08:46.436 verify_dump=1
00:08:46.436 verify_backlog=512
00:08:46.436 verify_state_save=0
00:08:46.436 do_verify=1
00:08:46.436 verify=crc32c-intel
00:08:46.436 [job0]
00:08:46.436 filename=/dev/nvme0n1
00:08:46.436 [job1]
00:08:46.436 filename=/dev/nvme0n2
00:08:46.436 [job2]
00:08:46.436 filename=/dev/nvme0n3
00:08:46.436 [job3]
00:08:46.436 filename=/dev/nvme0n4
00:08:46.436 Could not set queue depth (nvme0n1)
00:08:46.436 Could not set queue depth (nvme0n2)
00:08:46.436 Could not set queue depth (nvme0n3)
00:08:46.436 Could not set queue depth (nvme0n4)
00:08:46.436 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:46.436 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:46.436 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:46.436 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:46.436 fio-3.35
00:08:46.436 Starting 4 threads
00:08:47.369
00:08:47.369 job0: (groupid=0, jobs=1): err= 0: pid=1523490: Wed Jul 24 20:36:42 2024
00:08:47.369 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:08:47.369 slat (nsec): min=5616, max=50020, avg=12987.07, stdev=5814.68
00:08:47.369 clat (usec): min=243, max=40978, avg=377.22, stdev=1038.00
00:08:47.369 lat (usec): min=249, max=40984, avg=390.21, stdev=1037.91
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 306],
00:08:47.369 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 355], 00:08:47.369 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 445], 00:08:47.369 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[41157], 00:08:47.369 | 99.99th=[41157] 00:08:47.369 write: IOPS=1644, BW=6577KiB/s (6735kB/s)(6584KiB/1001msec); 0 zone resets 00:08:47.369 slat (nsec): min=6809, max=54959, avg=16116.31, stdev=6767.29 00:08:47.369 clat (usec): min=163, max=394, avg=218.71, stdev=28.65 00:08:47.369 lat (usec): min=172, max=423, avg=234.83, stdev=30.03 00:08:47.369 clat percentiles (usec): 00:08:47.369 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:08:47.369 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:08:47.369 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 281], 00:08:47.369 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 396], 00:08:47.369 | 99.99th=[ 396] 00:08:47.369 bw ( KiB/s): min= 8192, max= 8192, per=34.61%, avg=8192.00, stdev= 0.00, samples=1 00:08:47.369 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:47.369 lat (usec) : 250=46.29%, 500=53.02%, 750=0.66% 00:08:47.369 lat (msec) : 50=0.03% 00:08:47.369 cpu : usr=4.50%, sys=5.60%, ctx=3182, majf=0, minf=1 00:08:47.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.369 issued rwts: total=1536,1646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.369 job1: (groupid=0, jobs=1): err= 0: pid=1523491: Wed Jul 24 20:36:42 2024 00:08:47.369 read: IOPS=946, BW=3784KiB/s (3875kB/s)(3788KiB/1001msec) 00:08:47.369 slat (nsec): min=7105, max=69156, avg=21354.22, stdev=10434.61 00:08:47.369 clat (usec): min=246, max=42053, avg=708.19, stdev=3771.04 
00:08:47.369 lat (usec): min=253, max=42069, avg=729.54, stdev=3770.60
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 314],
00:08:47.369 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 379],
00:08:47.369 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 465],
00:08:47.369 | 99.00th=[ 537], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:08:47.369 | 99.99th=[42206]
00:08:47.369 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:08:47.369 slat (usec): min=6, max=40606, avg=74.46, stdev=1418.46
00:08:47.369 clat (usec): min=156, max=409, avg=217.21, stdev=34.01
00:08:47.369 lat (usec): min=163, max=40836, avg=291.68, stdev=1421.36
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188],
00:08:47.369 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 223],
00:08:47.369 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273],
00:08:47.369 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 396], 99.95th=[ 408],
00:08:47.369 | 99.99th=[ 408]
00:08:47.369 bw ( KiB/s): min= 4096, max= 4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1
00:08:47.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:47.369 lat (usec) : 250=44.70%, 500=54.34%, 750=0.56%
00:08:47.369 lat (msec) : 50=0.41%
00:08:47.369 cpu : usr=2.20%, sys=3.40%, ctx=1975, majf=0, minf=1
00:08:47.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:47.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 issued rwts: total=947,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:47.369 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:47.369 job2: (groupid=0, jobs=1): err= 0: pid=1523492: Wed Jul 24 20:36:42 2024
00:08:47.369 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:08:47.369 slat (nsec): min=5044, max=71558, avg=18517.15, stdev=10331.98
00:08:47.369 clat (usec): min=245, max=41839, avg=380.10, stdev=1095.18
00:08:47.369 lat (usec): min=259, max=41870, avg=398.62, stdev=1095.48
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302],
00:08:47.369 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 351],
00:08:47.369 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 445],
00:08:47.369 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 8586], 99.95th=[41681],
00:08:47.369 | 99.99th=[41681]
00:08:47.369 write: IOPS=1701, BW=6805KiB/s (6969kB/s)(6812KiB/1001msec); 0 zone resets
00:08:47.369 slat (nsec): min=6059, max=45103, avg=14084.60, stdev=5441.73
00:08:47.369 clat (usec): min=162, max=478, avg=203.98, stdev=19.74
00:08:47.369 lat (usec): min=168, max=524, avg=218.07, stdev=20.41
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188],
00:08:47.369 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206],
00:08:47.369 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239],
00:08:47.369 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 326], 99.95th=[ 478],
00:08:47.369 | 99.99th=[ 478]
00:08:47.369 bw ( KiB/s): min= 8192, max= 8192, per=34.61%, avg=8192.00, stdev= 0.00, samples=1
00:08:47.369 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:08:47.369 lat (usec) : 250=51.59%, 500=47.79%, 750=0.49%, 1000=0.03%
00:08:47.369 lat (msec) : 10=0.06%, 50=0.03%
00:08:47.369 cpu : usr=4.10%, sys=4.40%, ctx=3239, majf=0, minf=1
00:08:47.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:47.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 issued rwts: total=1536,1703,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:47.369 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:47.369 job3: (groupid=0, jobs=1): err= 0: pid=1523493: Wed Jul 24 20:36:42 2024
00:08:47.369 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:08:47.369 slat (nsec): min=5651, max=50441, avg=13798.36, stdev=5824.59
00:08:47.369 clat (usec): min=263, max=8641, avg=366.95, stdev=219.80
00:08:47.369 lat (usec): min=270, max=8650, avg=380.75, stdev=220.11
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 322],
00:08:47.369 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363],
00:08:47.369 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 433], 95.00th=[ 478],
00:08:47.369 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 1254], 99.95th=[ 8586],
00:08:47.369 | 99.99th=[ 8586]
00:08:47.369 write: IOPS=1549, BW=6198KiB/s (6347kB/s)(6204KiB/1001msec); 0 zone resets
00:08:47.369 slat (nsec): min=7230, max=60942, avg=16776.74, stdev=7440.98
00:08:47.369 clat (usec): min=174, max=390, avg=241.69, stdev=23.65
00:08:47.369 lat (usec): min=184, max=404, avg=258.46, stdev=26.62
00:08:47.369 clat percentiles (usec):
00:08:47.369 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 223],
00:08:47.369 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247],
00:08:47.369 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281],
00:08:47.369 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 392],
00:08:47.369 | 99.99th=[ 392]
00:08:47.369 bw ( KiB/s): min= 8192, max= 8192, per=34.61%, avg=8192.00, stdev= 0.00, samples=1
00:08:47.369 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:08:47.369 lat (usec) : 250=33.20%, 500=65.57%, 750=1.10%, 1000=0.03%
00:08:47.369 lat (msec) : 2=0.06%, 10=0.03%
00:08:47.369 cpu : usr=4.30%, sys=5.90%, ctx=3088, majf=0, minf=2
00:08:47.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:47.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.369 issued rwts: total=1536,1551,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:47.369 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:47.369
00:08:47.369 Run status group 0 (all jobs):
00:08:47.369 READ: bw=21.7MiB/s (22.7MB/s), 3784KiB/s-6138KiB/s (3875kB/s-6285kB/s), io=21.7MiB (22.8MB), run=1001-1001msec
00:08:47.369 WRITE: bw=23.1MiB/s (24.2MB/s), 4092KiB/s-6805KiB/s (4190kB/s-6969kB/s), io=23.1MiB (24.3MB), run=1001-1001msec
00:08:47.369
00:08:47.369 Disk stats (read/write):
00:08:47.369 nvme0n1: ios=1254/1536, merge=0/0, ticks=467/307, in_queue=774, util=87.07%
00:08:47.369 nvme0n2: ios=605/1024, merge=0/0, ticks=795/211, in_queue=1006, util=91.56%
00:08:47.369 nvme0n3: ios=1268/1536, merge=0/0, ticks=541/290, in_queue=831, util=94.98%
00:08:47.369 nvme0n4: ios=1212/1536, merge=0/0, ticks=504/354, in_queue=858, util=96.00%
00:08:47.370 20:36:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:08:47.370 [global]
00:08:47.370 thread=1
00:08:47.370 invalidate=1
00:08:47.370 rw=randwrite
00:08:47.370 time_based=1
00:08:47.370 runtime=1
00:08:47.370 ioengine=libaio
00:08:47.370 direct=1
00:08:47.370 bs=4096
00:08:47.370 iodepth=1
00:08:47.370 norandommap=0
00:08:47.370 numjobs=1
00:08:47.370
00:08:47.370 verify_dump=1
00:08:47.370 verify_backlog=512
00:08:47.370 verify_state_save=0
00:08:47.370 do_verify=1
00:08:47.370 verify=crc32c-intel
00:08:47.370 [job0]
00:08:47.370 filename=/dev/nvme0n1
00:08:47.370 [job1]
00:08:47.370 filename=/dev/nvme0n2
00:08:47.370 [job2]
00:08:47.370 filename=/dev/nvme0n3
00:08:47.370 [job3]
00:08:47.370 filename=/dev/nvme0n4
00:08:47.370 Could not set queue depth (nvme0n1)
00:08:47.370 Could not set queue depth (nvme0n2)
00:08:47.370 Could not set queue depth (nvme0n3)
00:08:47.370 Could not set queue depth (nvme0n4)
00:08:47.628 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:47.628 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:47.628 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:47.628 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:47.628 fio-3.35
00:08:47.628 Starting 4 threads
00:08:49.012
00:08:49.012 job0: (groupid=0, jobs=1): err= 0: pid=1523717: Wed Jul 24 20:36:44 2024
00:08:49.012 read: IOPS=396, BW=1584KiB/s (1622kB/s)(1624KiB/1025msec)
00:08:49.012 slat (nsec): min=4779, max=38710, avg=17408.41, stdev=8959.22
00:08:49.012 clat (usec): min=240, max=42185, avg=2250.67, stdev=8766.23
00:08:49.012 lat (usec): min=253, max=42223, avg=2268.08, stdev=8769.03
00:08:49.012 clat percentiles (usec):
00:08:49.012 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 269], 20.00th=[ 285],
00:08:49.012 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314],
00:08:49.012 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 449],
00:08:49.012 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:08:49.012 | 99.99th=[42206]
00:08:49.012 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets
00:08:49.012 slat (nsec): min=6322, max=30196, avg=9559.15, stdev=4884.66
00:08:49.012 clat (usec): min=161, max=361, avg=185.08, stdev=15.79
00:08:49.012 lat (usec): min=170, max=372, avg=194.64, stdev=16.97
00:08:49.012 clat percentiles (usec):
00:08:49.012 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176],
00:08:49.012 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186],
00:08:49.012 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206],
00:08:49.012 | 99.00th=[ 229], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 363],
00:08:49.012 | 99.99th=[ 363]
00:08:49.012 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1
00:08:49.012 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:49.012 lat (usec) : 250=57.63%, 500=40.20%
00:08:49.012 lat (msec) : 2=0.11%, 50=2.07%
00:08:49.012 cpu : usr=0.88%, sys=0.98%, ctx=920, majf=0, minf=1
00:08:49.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.012 issued rwts: total=406,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.012 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:49.012 job1: (groupid=0, jobs=1): err= 0: pid=1523719: Wed Jul 24 20:36:44 2024
00:08:49.012 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec)
00:08:49.012 slat (nsec): min=13664, max=35533, avg=29522.67, stdev=8359.58
00:08:49.012 clat (usec): min=40939, max=41989, avg=41686.26, stdev=438.40
00:08:49.012 lat (usec): min=40957, max=42006, avg=41715.78, stdev=442.46
00:08:49.012 clat percentiles (usec):
00:08:49.012 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:08:49.012 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:08:49.012 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:08:49.012 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:08:49.012 | 99.99th=[42206]
00:08:49.012 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets
00:08:49.012 slat (nsec): min=6866, max=55576, avg=13196.00, stdev=6456.44
00:08:49.012 clat (usec): min=163, max=393, avg=233.19, stdev=32.29
00:08:49.012 lat (usec): min=174, max=410, avg=246.38, stdev=31.27
00:08:49.012 clat percentiles (usec):
00:08:49.012 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212],
00:08:49.012 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233],
00:08:49.012 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 306],
00:08:49.012 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 396], 99.95th=[ 396],
00:08:49.012 | 99.99th=[ 396]
00:08:49.012 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1
00:08:49.012 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:49.012 lat (usec) : 250=79.92%, 500=16.14%
00:08:49.012 lat (msec) : 50=3.94%
00:08:49.012 cpu : usr=0.40%, sys=0.90%, ctx=535, majf=0, minf=2
00:08:49.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:49.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.012 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.012 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:49.012 job2: (groupid=0, jobs=1): err= 0: pid=1523723: Wed Jul 24 20:36:44 2024
00:08:49.012 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec)
00:08:49.012 slat (nsec): min=12885, max=43861, avg=30103.05, stdev=8360.34
00:08:49.012 clat (usec): min=40540, max=41139, avg=40943.53, stdev=112.84
00:08:49.012 lat (usec): min=40573, max=41158, avg=40973.63, stdev=111.81
00:08:49.012 clat percentiles (usec):
00:08:49.012 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:08:49.012 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:08:49.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:08:49.012 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:08:49.012 | 99.99th=[41157]
00:08:49.012 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets
00:08:49.013 slat (nsec): min=6380, max=40997, avg=11052.05, stdev=6540.02
00:08:49.013 clat (usec): min=188, max=1134, avg=237.53, stdev=49.45
00:08:49.013 lat (usec): min=195, max=1143, avg=248.58, stdev=50.94
00:08:49.013 clat percentiles (usec):
00:08:49.013 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212],
00:08:49.013 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239],
00:08:49.013 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281],
00:08:49.013 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 1139], 99.95th=[ 1139],
00:08:49.013 | 99.99th=[ 1139]
00:08:49.013 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1
00:08:49.013 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:49.013 lat (usec) : 250=73.22%, 500=22.47%
00:08:49.013 lat (msec) : 2=0.19%, 50=4.12%
00:08:49.013 cpu : usr=0.29%, sys=0.58%, ctx=535, majf=0, minf=1
00:08:49.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:49.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.013 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.013 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:49.013 job3: (groupid=0, jobs=1): err= 0: pid=1523724: Wed Jul 24 20:36:44 2024
00:08:49.013 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:08:49.013 slat (nsec): min=4711, max=54627, avg=14615.98, stdev=7709.75
00:08:49.013 clat (usec): min=209, max=456, avg=254.68, stdev=31.85
00:08:49.013 lat (usec): min=214, max=479, avg=269.30, stdev=35.20
00:08:49.013 clat percentiles (usec):
00:08:49.013 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 233],
00:08:49.013 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249],
00:08:49.013 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 318],
00:08:49.013 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 429], 99.95th=[ 441],
00:08:49.013 | 99.99th=[ 457]
00:08:49.013 write: IOPS=2059, BW=8240KiB/s (8438kB/s)(8248KiB/1001msec); 0 zone resets
00:08:49.013 slat (nsec): min=6367, max=40931, avg=15165.59, stdev=4612.62
00:08:49.013 clat (usec): min=155, max=476, avg=193.87, stdev=36.46
00:08:49.013 lat (usec): min=165, max=501, avg=209.04, stdev=35.50
00:08:49.013 clat percentiles (usec):
00:08:49.013 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169],
00:08:49.013 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188],
00:08:49.013 | 70.00th=[ 196], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 253],
00:08:49.013 | 99.00th=[ 343], 99.50th=[ 392], 99.90th=[ 465], 99.95th=[ 469],
00:08:49.013 | 99.99th=[ 478]
00:08:49.013 bw ( KiB/s): min= 8192, max= 8192, per=58.69%, avg=8192.00, stdev= 0.00, samples=1
00:08:49.013 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:08:49.013 lat (usec) : 250=77.93%, 500=22.07%
00:08:49.013 cpu : usr=3.20%, sys=6.60%, ctx=4111, majf=0, minf=1
00:08:49.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:49.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:49.013 issued rwts: total=2048,2062,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:49.013 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:49.013
00:08:49.013 Run status group 0 (all jobs):
00:08:49.013 READ: bw=9688KiB/s (9920kB/s), 83.6KiB/s-8184KiB/s (85.6kB/s-8380kB/s), io=9988KiB (10.2MB), run=1001-1031msec
00:08:49.013 WRITE: bw=13.6MiB/s (14.3MB/s), 1986KiB/s-8240KiB/s (2034kB/s-8438kB/s), io=14.1MiB (14.7MB), run=1001-1031msec
00:08:49.013
00:08:49.013 Disk stats (read/write):
00:08:49.013 nvme0n1: ios=413/512, merge=0/0, ticks=1703/88, in_queue=1791, util=97.90%
00:08:49.013 nvme0n2: ios=67/512, merge=0/0, ticks=1689/114, in_queue=1803, util=98.17%
00:08:49.013 nvme0n3: ios=75/512, merge=0/0, ticks=1376/116, in_queue=1492, util=98.01%
00:08:49.013 nvme0n4: ios=1593/1967, merge=0/0, ticks=709/373, in_queue=1082, util=98.00%
00:08:49.013 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:08:49.013 [global]
00:08:49.013 thread=1
00:08:49.013 invalidate=1
00:08:49.013 rw=write
00:08:49.013 time_based=1
00:08:49.013 runtime=1
00:08:49.013 ioengine=libaio
00:08:49.013 direct=1
00:08:49.013 bs=4096
00:08:49.013 iodepth=128
00:08:49.013 norandommap=0
00:08:49.013 numjobs=1
00:08:49.013
00:08:49.013 verify_dump=1
00:08:49.013 verify_backlog=512
00:08:49.013 verify_state_save=0
00:08:49.013 do_verify=1
00:08:49.013 verify=crc32c-intel
00:08:49.013 [job0]
00:08:49.013 filename=/dev/nvme0n1
00:08:49.013 [job1]
00:08:49.013 filename=/dev/nvme0n2
00:08:49.013 [job2]
00:08:49.013 filename=/dev/nvme0n3
00:08:49.013 [job3]
00:08:49.013 filename=/dev/nvme0n4
00:08:49.013 Could not set queue depth (nvme0n1)
00:08:49.013 Could not set queue depth (nvme0n2)
00:08:49.013 Could not set queue depth (nvme0n3)
00:08:49.013 Could not set queue depth (nvme0n4)
00:08:50.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 fio-3.35
00:08:50.641 Starting 4 threads
00:08:50.383
00:08:50.383 job0: (groupid=0, jobs=1): err= 0: pid=1524077: Wed Jul 24 20:36:45 2024
00:08:50.383 read: IOPS=1530, BW=6122KiB/s (6269kB/s)(6404KiB/1046msec)
00:08:50.383 slat (usec): min=3, max=46549, avg=362.16, stdev=2756.63
00:08:50.383 clat (msec): min=8, max=125, avg=54.10, stdev=36.26
00:08:50.383 lat (msec): min=10, max=126, avg=54.46, stdev=36.41
00:08:50.383 clat percentiles (msec):
00:08:50.383 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 23],
00:08:50.383 | 30.00th=[ 25], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 54],
00:08:50.383 | 70.00th=[ 70], 80.00th=[ 97], 90.00th=[ 115], 95.00th=[ 123],
00:08:50.383 | 99.00th=[ 127], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127],
00:08:50.383 | 99.99th=[ 127]
00:08:50.383 write: IOPS=1957, BW=7832KiB/s (8020kB/s)(8192KiB/1046msec); 0 zone resets
00:08:50.383 slat (usec): min=4, max=37122, avg=195.65, stdev=1529.26
00:08:50.383 clat (usec): min=8260, max=69401, avg=21044.49, stdev=15041.94
00:08:50.383 lat (usec): min=10536, max=69419, avg=21240.14, stdev=15131.87
00:08:50.383 clat percentiles (usec):
00:08:50.383 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10814], 20.00th=[11076],
00:08:50.383 | 30.00th=[14484], 40.00th=[14877], 50.00th=[16712], 60.00th=[17433],
00:08:50.383 | 70.00th=[17695], 80.00th=[21103], 90.00th=[49021], 95.00th=[61604],
00:08:50.383 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731],
00:08:50.383 | 99.99th=[69731]
00:08:50.383 bw ( KiB/s): min= 7688, max= 8192, per=12.59%, avg=7940.00, stdev=356.38, samples=2
00:08:50.383 iops : min= 1922, max= 2048, avg=1985.00, stdev=89.10, samples=2
00:08:50.383 lat (msec) : 10=0.55%, 20=47.11%, 50=28.83%, 100=15.29%, 250=8.22%
00:08:50.383 cpu : usr=2.11%, sys=4.11%, ctx=121, majf=0, minf=1
00:08:50.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:08:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:50.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:50.383 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:50.383 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:50.383 job1: (groupid=0, jobs=1): err= 0: pid=1524078: Wed Jul 24 20:36:45 2024
00:08:50.383 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec)
00:08:50.383 slat (usec): min=2, max=11364, avg=94.84, stdev=661.30
00:08:50.383 clat (usec): min=3797, max=31640, avg=11787.78, stdev=3624.53
00:08:50.383 lat (usec): min=3804, max=31648, avg=11882.63, stdev=3666.92
00:08:50.383 clat percentiles (usec):
00:08:50.383 | 1.00th=[ 4015], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[ 9896],
00:08:50.383 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469],
00:08:50.383 | 70.00th=[11994], 80.00th=[13566], 90.00th=[16188], 95.00th=[18744],
00:08:50.383 | 99.00th=[26608], 99.50th=[28967], 99.90th=[31589], 99.95th=[31589],
00:08:50.383 | 99.99th=[31589]
00:08:50.383 write: IOPS=4981, BW=19.5MiB/s (20.4MB/s)(19.7MiB/1010msec); 0 zone resets
00:08:50.383 slat (usec): min=4, max=18944, avg=102.97, stdev=589.60
00:08:50.383 clat (usec): min=338, max=63959, avg=14099.98, stdev=11030.97
00:08:50.383 lat (usec): min=567, max=63966, avg=14202.94, stdev=11099.64
00:08:50.383 clat percentiles (usec):
00:08:50.383 | 1.00th=[ 963], 5.00th=[ 5145], 10.00th=[ 7832], 20.00th=[10028],
00:08:50.383 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11731],
00:08:50.383 | 70.00th=[11994], 80.00th=[12518], 90.00th=[22938], 95.00th=[41681],
00:08:50.383 | 99.00th=[62653], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701],
00:08:50.383 | 99.99th=[63701]
00:08:50.383 bw ( KiB/s): min=15632, max=23600, per=31.11%, avg=19616.00, stdev=5634.23, samples=2
00:08:50.383 iops : min= 3908, max= 5900, avg=4904.00, stdev=1408.56, samples=2
00:08:50.383 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.46%
00:08:50.383 lat (msec) : 2=0.04%, 4=1.55%, 10=21.36%, 20=67.74%, 50=6.74%
00:08:50.383 lat (msec) : 100=2.04%
00:08:50.383 cpu : usr=4.06%, sys=5.95%, ctx=618, majf=0, minf=1
00:08:50.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:08:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:50.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:50.383 issued rwts: total=4608,5031,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:50.383 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:50.383 job2: (groupid=0, jobs=1): err= 0: pid=1524079: Wed Jul 24 20:36:45 2024
00:08:50.383 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec)
00:08:50.383 slat (usec): min=2, max=11184, avg=105.22, stdev=740.52
00:08:50.383 clat (usec): min=1445, max=33919, avg=13785.09, stdev=3889.79
00:08:50.383 lat (usec): min=1453, max=33936, avg=13890.30, stdev=3941.36
00:08:50.383 clat percentiles (usec):
00:08:50.383 | 1.00th=[ 4178], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11469],
00:08:50.383 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13304], 60.00th=[13961],
00:08:50.383 | 70.00th=[14615], 80.00th=[15270], 90.00th=[17171], 95.00th=[19530],
00:08:50.383 | 99.00th=[30278], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817],
00:08:50.383 | 99.99th=[33817]
00:08:50.383 write: IOPS=4487, BW=17.5MiB/s (18.4MB/s)(17.7MiB/1012msec); 0 zone resets
00:08:50.383 slat (usec): min=4, max=11202, avg=112.52, stdev=690.17
00:08:50.383 clat (usec): min=1443, max=38245, avg=15819.91, stdev=7539.76
00:08:50.383 lat (usec): min=1456, max=38259, avg=15932.43, stdev=7599.58
00:08:50.383 clat percentiles (usec):
00:08:50.383 | 1.00th=[ 4621], 5.00th=[ 7373], 10.00th=[ 8586], 20.00th=[10159],
00:08:50.383 | 30.00th=[11600], 40.00th=[11863], 50.00th=[13173], 60.00th=[13960],
00:08:50.383 | 70.00th=[16909], 80.00th=[22938], 90.00th=[28967], 95.00th=[31065],
00:08:50.383 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011],
00:08:50.383 | 99.99th=[38011]
00:08:50.383 bw ( KiB/s): min=16368, max=18944, per=28.00%, avg=17656.00, stdev=1821.51, samples=2
00:08:50.383 iops : min= 4092, max= 4736, avg=4414.00, stdev=455.38, samples=2
00:08:50.383 lat (msec) : 2=0.15%, 4=0.66%, 10=11.75%, 20=71.46%, 50=15.98%
00:08:50.384 cpu : usr=6.82%, sys=9.50%, ctx=306, majf=0, minf=1
00:08:50.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:08:50.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:50.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:50.384 issued rwts: total=4096,4541,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:50.384 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:50.384 job3: (groupid=0, jobs=1): err= 0: pid=1524080: Wed Jul 24 20:36:45 2024
00:08:50.384 read: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1015msec)
00:08:50.384 slat (usec): min=3, max=12299, avg=106.74, stdev=763.04
00:08:50.384 clat (usec): min=4935, max=34182, avg=13438.10, stdev=3601.17
00:08:50.384 lat (usec): min=4941, max=34188, avg=13544.84, stdev=3661.44
00:08:50.384 clat percentiles (usec):
00:08:50.384 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[11338], 20.00th=[11731],
00:08:50.384 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649],
00:08:50.384 | 70.00th=[12911], 80.00th=[13698], 90.00th=[17695], 95.00th=[21103],
00:08:50.384 | 99.00th=[28705], 99.50th=[31065], 99.90th=[34341], 99.95th=[34341],
00:08:50.384 | 99.99th=[34341]
00:08:50.384 write: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(19.0MiB/1015msec); 0 zone resets
00:08:50.384 slat (usec): min=3, max=10653, avg=92.35, stdev=570.43
00:08:50.384 clat (usec): min=362, max=46229, avg=13665.28, stdev=6874.55
00:08:50.384 lat (usec): min=950, max=46236, avg=13757.63, stdev=6932.07
00:08:50.384 clat percentiles (usec):
00:08:50.384 | 1.00th=[ 3392], 5.00th=[ 5997], 10.00th=[ 7635], 20.00th=[10159],
00:08:50.384 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12780],
00:08:50.384 | 70.00th=[13304], 80.00th=[14746], 90.00th=[22152], 95.00th=[28967],
00:08:50.384 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400],
00:08:50.384 | 99.99th=[46400]
00:08:50.384 bw ( KiB/s): min=15936, max=22044, per=30.11%, avg=18990.00, stdev=4319.01, samples=2
00:08:50.384 iops : min= 3984, max= 5511, avg=4747.50, stdev=1079.75, samples=2
00:08:50.384 lat (usec) : 500=0.01%, 1000=0.04%
00:08:50.384 lat (msec) : 2=0.11%, 4=1.02%, 10=10.53%, 20=78.40%, 50=9.89%
00:08:50.384 cpu : usr=5.42%, sys=7.69%, ctx=414, majf=0, minf=1
00:08:50.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:08:50.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:50.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:50.384 issued rwts: total=4608,4870,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:50.384 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:50.384
00:08:50.384 Run status group 0 (all jobs):
00:08:50.384 READ: bw=55.7MiB/s (58.4MB/s), 6122KiB/s-17.8MiB/s (6269kB/s-18.7MB/s), io=58.3MiB (61.1MB), run=1010-1046msec
00:08:50.384 WRITE: bw=61.6MiB/s (64.6MB/s), 7832KiB/s-19.5MiB/s (8020kB/s-20.4MB/s), io=64.4MiB (67.5MB), run=1010-1046msec
00:08:50.384
00:08:50.384 Disk stats (read/write):
00:08:50.384 nvme0n1: ios=1430/1536, merge=0/0, ticks=17866/8515, in_queue=26381, util=97.19%
00:08:50.384 nvme0n2: ios=4118/4607, merge=0/0, ticks=43192/52074, in_queue=95266, util=97.97%
00:08:50.384 nvme0n3: ios=3602/3708, merge=0/0, ticks=45705/53363, in_queue=99068, util=97.08%
00:08:50.384 nvme0n4: ios=4096/4295, merge=0/0, ticks=52400/48669, in_queue=101069, util=89.59%
00:08:50.384 20:36:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:08:50.384 [global]
00:08:50.384 thread=1
00:08:50.384 invalidate=1
00:08:50.384 rw=randwrite
00:08:50.384 time_based=1
00:08:50.384 runtime=1
00:08:50.384 ioengine=libaio
00:08:50.384 direct=1
00:08:50.384 bs=4096
00:08:50.384 iodepth=128
00:08:50.384 norandommap=0
00:08:50.384 numjobs=1
00:08:50.384
00:08:50.384 verify_dump=1
00:08:50.384 verify_backlog=512
00:08:50.384 verify_state_save=0
00:08:50.384 do_verify=1
00:08:50.384 verify=crc32c-intel
00:08:50.384 [job0]
00:08:50.384 filename=/dev/nvme0n1
00:08:50.384 [job1]
00:08:50.384 filename=/dev/nvme0n2
00:08:50.384 [job2]
00:08:50.384 filename=/dev/nvme0n3
00:08:50.384 [job3]
00:08:50.384 filename=/dev/nvme0n4
00:08:50.384 Could not set queue depth (nvme0n1)
00:08:50.384 Could not set queue depth (nvme0n2)
00:08:50.384 Could not set queue depth (nvme0n3)
00:08:50.384 Could not set queue depth (nvme0n4)
00:08:50.641 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:50.641 fio-3.35
00:08:50.641 Starting 4 threads
00:08:52.012
00:08:52.012 job0: (groupid=0, jobs=1): err= 0: pid=1524304: Wed Jul 24 20:36:47 2024
00:08:52.012 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec)
00:08:52.012 slat (usec): min=3, max=10440, avg=112.02, stdev=622.00
00:08:52.012 clat (usec): min=4937, max=34998, avg=13651.62, stdev=3663.35
00:08:52.012 lat (usec): min=4950, max=35017, avg=13763.64, stdev=3720.00
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 7439], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10421],
00:08:52.012 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698],
00:08:52.012 | 70.00th=[14222], 80.00th=[15926], 90.00th=[16909], 95.00th=[19792],
00:08:52.012 | 99.00th=[29230], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866],
00:08:52.012 | 99.99th=[34866]
00:08:52.012 write: IOPS=3308, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1008msec); 0 zone resets
00:08:52.012 slat (usec): min=5, max=18571, avg=180.88, stdev=1008.46
00:08:52.012 clat (msec): min=3, max=123, avg=25.67, stdev=21.76
00:08:52.012 lat (msec): min=4, max=123, avg=25.86, stdev=21.89
00:08:52.012 clat percentiles (msec):
00:08:52.012 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12],
00:08:52.012 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 22],
00:08:52.012 | 70.00th=[ 30], 80.00th=[ 36], 90.00th=[ 50], 95.00th=[ 72],
00:08:52.012 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124],
00:08:52.012 | 99.99th=[ 124]
00:08:52.012 bw ( KiB/s): min=12288, max=13368, per=18.85%, avg=12828.00, stdev=763.68, samples=2
00:08:52.012 iops : min= 3072, max= 3342, avg=3207.00, stdev=190.92, samples=2
00:08:52.012 lat (msec) : 4=0.02%, 10=9.86%, 20=64.52%, 50=20.82%, 100=3.32%
00:08:52.012 lat (msec) : 250=1.45%
00:08:52.012 cpu : usr=5.96%, sys=7.45%, ctx=383, majf=0, minf=1
00:08:52.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:08:52.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:52.012 issued rwts: total=3072,3335,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.012 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:52.012 job1: (groupid=0, jobs=1): err= 0: pid=1524305: Wed Jul 24 20:36:47 2024
00:08:52.012 read: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1005msec)
00:08:52.012 slat (usec): min=2, max=10366, avg=98.13, stdev=628.34
00:08:52.012 clat (usec): min=2377, max=31018, avg=12564.68, stdev=3668.48
00:08:52.012 lat (usec): min=4099, max=34394, avg=12662.81, stdev=3712.65
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 5800], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9896],
00:08:52.012 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649],
00:08:52.012 | 70.00th=[14091], 80.00th=[14484], 90.00th=[17433], 95.00th=[20579],
00:08:52.012 | 99.00th=[24773], 99.50th=[26084], 99.90th=[29230], 99.95th=[29230],
00:08:52.012 | 99.99th=[31065]
00:08:52.012 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets
00:08:52.012 slat (usec): min=3, max=10719, avg=112.78, stdev=571.47
00:08:52.012 clat (usec): min=1468, max=47834, avg=15668.61, stdev=9870.97
00:08:52.012 lat (usec): min=1480, max=47851, avg=15781.39, stdev=9943.92
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 8586], 20.00th=[ 9503],
00:08:52.012 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11338], 60.00th=[11994],
00:08:52.012 | 70.00th=[13960], 80.00th=[21890], 90.00th=[33424], 95.00th=[37487],
00:08:52.012 | 99.00th=[45876], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973],
00:08:52.012 | 99.99th=[47973]
00:08:52.012 bw ( KiB/s): min=12992, max=23872, per=27.09%, avg=18432.00, stdev=7693.32, samples=2
00:08:52.012 iops : min= 3248, max= 5968, avg=4608.00, stdev=1923.33, samples=2
00:08:52.012 lat (msec) : 2=0.02%, 4=0.39%, 10=22.51%, 20=62.08%, 50=15.00%
00:08:52.012 cpu : usr=6.47%, sys=9.36%, ctx=521, majf=0, minf=1
00:08:52.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:08:52.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:52.012 issued rwts: total=4403,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.012 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:52.012 job2: (groupid=0, jobs=1): err= 0: pid=1524306: Wed Jul 24 20:36:47 2024
00:08:52.012 read: IOPS=4268, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1004msec)
00:08:52.012 slat (usec): min=3, max=14664, avg=115.37, stdev=762.39
00:08:52.012 clat (usec): min=759, max=29090, avg=14536.89, stdev=3892.28
00:08:52.012 lat (usec): min=4084, max=29105, avg=14652.26, stdev=3939.58
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 6128], 5.00th=[10159], 10.00th=[11600], 20.00th=[12256],
00:08:52.012 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13829],
00:08:52.012 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19530], 95.00th=[23462],
00:08:52.012 | 99.00th=[27132], 99.50th=[27657], 99.90th=[28967], 99.95th=[28967],
00:08:52.012 | 99.99th=[28967]
00:08:52.012 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets
00:08:52.012 slat (usec): min=4, max=8600, avg=97.50, stdev=450.27
00:08:52.012 clat (usec): min=1516, max=35568, avg=14047.96, stdev=4859.51
00:08:52.012 lat (usec): min=1526, max=35590, avg=14145.46, stdev=4895.78
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 3064], 5.00th=[ 6980], 10.00th=[ 9110], 20.00th=[11731],
00:08:52.012 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13566], 60.00th=[14222],
00:08:52.012 | 70.00th=[14877], 80.00th=[15664], 90.00th=[18744], 95.00th=[24249],
00:08:52.012 | 99.00th=[31589], 99.50th=[32375], 99.90th=[35390], 99.95th=[35390],
00:08:52.012 | 99.99th=[35390]
00:08:52.012 bw ( KiB/s): min=16624, max=20240, per=27.09%, avg=18432.00, stdev=2556.90, samples=2
00:08:52.012 iops : min= 4156, max= 5060, avg=4608.00, stdev=639.22, samples=2
00:08:52.012 lat (usec) : 1000=0.01%
00:08:52.012 lat (msec) : 2=0.21%, 4=0.76%, 10=7.42%, 20=82.58%, 50=9.01%
00:08:52.012 cpu : usr=6.48%, sys=9.87%, ctx=516, majf=0, minf=1
00:08:52.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:08:52.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:52.012 issued rwts: total=4286,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.012 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:52.012 job3: (groupid=0, jobs=1): err= 0: pid=1524307: Wed Jul 24 20:36:47 2024
00:08:52.012 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec)
00:08:52.012 slat (usec): min=2, max=20112, avg=120.71, stdev=925.31
00:08:52.012 clat (usec): min=5446, max=38635, avg=15803.00, stdev=5291.53
00:08:52.012 lat (usec): min=5459, max=43625, avg=15923.71, stdev=5367.79
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 6915], 5.00th=[11338], 10.00th=[12518], 20.00th=[13042],
00:08:52.012 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[14615],
00:08:52.012 | 70.00th=[15795], 80.00th=[17171], 90.00th=[22414], 95.00th=[30016],
00:08:52.012 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487],
00:08:52.012 | 99.99th=[38536]
00:08:52.012 write: IOPS=4587, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets
00:08:52.012 slat (usec): min=4, max=11012, avg=92.26, stdev=694.98
00:08:52.012 clat (usec): min=634, max=40171, avg=13522.91, stdev=5053.01
00:08:52.012 lat (usec): min=917, max=40177, avg=13615.17, stdev=5108.51
00:08:52.012 clat percentiles (usec):
00:08:52.012 | 1.00th=[ 3851], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[11731],
00:08:52.012 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042],
00:08:52.012 | 70.00th=[13304], 80.00th=[13698], 90.00th=[19530], 95.00th=[22414],
00:08:52.012 | 99.00th=[33817], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109],
00:08:52.012 | 99.99th=[40109]
00:08:52.012 bw ( KiB/s): min=15784, max=19976, per=26.28%, avg=17880.00, stdev=2964.19, samples=2
00:08:52.012 iops : min= 3946, max= 4994, avg=4470.00, stdev=741.05, samples=2
00:08:52.012 lat (usec) : 750=0.01%, 1000=0.01%
00:08:52.012 lat (msec) : 2=0.06%, 4=0.45%, 10=8.71%, 20=80.34%, 50=10.42%
00:08:52.012 cpu : usr=5.29%, sys=8.59%, ctx=260, majf=0, minf=1
00:08:52.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:08:52.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:52.012 issued rwts: total=4096,4597,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.012 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:52.012
00:08:52.012 Run status group 0 (all jobs):
00:08:52.012 READ: bw=61.4MiB/s (64.4MB/s), 11.9MiB/s-17.1MiB/s (12.5MB/s-17.9MB/s), io=61.9MiB (64.9MB), run=1002-1008msec
00:08:52.012 WRITE: bw=66.5MiB/s (69.7MB/s), 12.9MiB/s-17.9MiB/s (13.6MB/s-18.8MB/s), io=67.0MiB (70.2MB), run=1002-1008msec
00:08:52.012
00:08:52.012 Disk stats (read/write):
00:08:52.012 nvme0n1: ios=2609/2775, merge=0/0, ticks=24814/55662, in_queue=80476, util=84.47%
00:08:52.012 nvme0n2: ios=3948/4096, merge=0/0, ticks=37083/41147, in_queue=78230, util=89.51%
00:08:52.012 nvme0n3: ios=3613/3663, merge=0/0, ticks=40953/42259, in_queue=83212, util=92.53%
00:08:52.012 nvme0n4: ios=3531/3584, merge=0/0, ticks=40042/35729, in_queue=75771, util=93.94%
00:08:52.012 20:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:08:52.012 20:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1524445
00:08:52.012 20:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:08:52.012 20:36:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:08:52.012 [global]
00:08:52.012 thread=1
00:08:52.012 invalidate=1
00:08:52.012 rw=read
00:08:52.012 time_based=1
00:08:52.012 runtime=10
00:08:52.012 ioengine=libaio
00:08:52.012 direct=1
00:08:52.012 bs=4096
00:08:52.012 iodepth=1
00:08:52.012 norandommap=1
00:08:52.012 numjobs=1
00:08:52.012
00:08:52.012 [job0]
00:08:52.012 filename=/dev/nvme0n1
00:08:52.012 [job1]
00:08:52.012 filename=/dev/nvme0n2
00:08:52.012 [job2]
00:08:52.012 filename=/dev/nvme0n3
00:08:52.012 [job3]
00:08:52.012 filename=/dev/nvme0n4
00:08:52.012 Could not set queue depth (nvme0n1)
00:08:52.012 Could not set queue depth (nvme0n2)
00:08:52.012 Could not set queue depth (nvme0n3)
00:08:52.012 Could not set queue depth (nvme0n4)
00:08:52.012 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio,
iodepth=1 00:08:52.012 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.012 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.012 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.012 fio-3.35 00:08:52.012 Starting 4 threads 00:08:55.288 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:55.288 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:55.288 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2056192, buflen=4096 00:08:55.288 fio: pid=1524542, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:55.288 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:55.288 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:55.288 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=495616, buflen=4096 00:08:55.288 fio: pid=1524541, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:55.546 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:55.546 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:55.546 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3608576, buflen=4096 00:08:55.546 fio: pid=1524539, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.112 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.112 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:56.112 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=532480, buflen=4096 00:08:56.112 fio: pid=1524540, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:56.112 00:08:56.112 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1524539: Wed Jul 24 20:36:51 2024 00:08:56.112 read: IOPS=255, BW=1021KiB/s (1045kB/s)(3524KiB/3452msec) 00:08:56.112 slat (usec): min=5, max=29848, avg=64.31, stdev=1113.15 00:08:56.112 clat (usec): min=238, max=42198, avg=3823.30, stdev=11553.04 00:08:56.112 lat (usec): min=244, max=71957, avg=3873.36, stdev=11712.55 00:08:56.112 clat percentiles (usec): 00:08:56.112 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:08:56.112 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:08:56.112 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 375], 95.00th=[41157], 00:08:56.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:56.112 | 99.99th=[42206] 00:08:56.112 bw ( KiB/s): min= 96, max= 168, per=6.39%, avg=112.00, stdev=28.17, samples=6 00:08:56.112 iops : min= 24, max= 42, avg=28.00, stdev= 7.04, samples=6 00:08:56.112 lat (usec) : 250=3.74%, 500=86.96%, 750=0.57% 00:08:56.112 lat (msec) : 50=8.62% 00:08:56.112 cpu : usr=0.17%, sys=0.29%, ctx=887, majf=0, minf=1 00:08:56.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 issued rwts: total=882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.112 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1524540: Wed Jul 24 20:36:51 2024 00:08:56.112 read: IOPS=35, BW=139KiB/s (143kB/s)(520KiB/3731msec) 00:08:56.112 slat (usec): min=12, max=3869, avg=54.85, stdev=335.99 00:08:56.112 clat (usec): min=366, max=42450, avg=28464.44, stdev=19060.21 00:08:56.112 lat (usec): min=401, max=44997, avg=28519.44, stdev=19078.02 00:08:56.112 clat percentiles (usec): 00:08:56.112 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 420], 20.00th=[ 465], 00:08:56.112 | 30.00th=[ 775], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:56.112 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:08:56.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:56.112 | 99.99th=[42206] 00:08:56.112 bw ( KiB/s): min= 100, max= 184, per=8.11%, avg=142.29, stdev=30.63, samples=7 00:08:56.112 iops : min= 25, max= 46, avg=35.57, stdev= 7.66, samples=7 00:08:56.112 lat (usec) : 500=28.24%, 750=0.76%, 1000=1.53% 00:08:56.112 lat (msec) : 4=0.76%, 50=67.94% 00:08:56.112 cpu : usr=0.00%, sys=0.19%, ctx=133, majf=0, minf=1 00:08:56.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.112 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1524541: Wed Jul 24 20:36:51 2024 00:08:56.112 read: IOPS=38, BW=152KiB/s (156kB/s)(484KiB/3179msec) 00:08:56.112 slat (usec): min=6, max=3882, avg=54.31, 
stdev=349.59 00:08:56.112 clat (usec): min=268, max=44976, avg=26032.46, stdev=19845.64 00:08:56.112 lat (usec): min=276, max=44994, avg=26087.12, stdev=19871.89 00:08:56.112 clat percentiles (usec): 00:08:56.112 | 1.00th=[ 293], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 510], 00:08:56.112 | 30.00th=[ 553], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:08:56.112 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:56.112 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:08:56.112 | 99.99th=[44827] 00:08:56.112 bw ( KiB/s): min= 96, max= 288, per=8.91%, avg=156.00, stdev=78.67, samples=6 00:08:56.112 iops : min= 24, max= 72, avg=39.00, stdev=19.67, samples=6 00:08:56.112 lat (usec) : 500=18.03%, 750=18.03% 00:08:56.112 lat (msec) : 4=0.82%, 20=0.82%, 50=61.48% 00:08:56.112 cpu : usr=0.09%, sys=0.03%, ctx=124, majf=0, minf=1 00:08:56.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.112 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1524542: Wed Jul 24 20:36:51 2024 00:08:56.112 read: IOPS=171, BW=686KiB/s (703kB/s)(2008KiB/2926msec) 00:08:56.112 slat (nsec): min=5050, max=41306, avg=14704.63, stdev=9526.00 00:08:56.112 clat (usec): min=219, max=42062, avg=5764.87, stdev=13984.51 00:08:56.112 lat (usec): min=226, max=42096, avg=5779.57, stdev=13988.42 00:08:56.112 clat percentiles (usec): 00:08:56.112 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:08:56.112 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 285], 00:08:56.112 | 70.00th=[ 302], 80.00th=[ 420], 90.00th=[41157], 
95.00th=[41681], 00:08:56.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:56.112 | 99.99th=[42206] 00:08:56.112 bw ( KiB/s): min= 144, max= 3184, per=44.07%, avg=772.80, stdev=1348.06, samples=5 00:08:56.112 iops : min= 36, max= 796, avg=193.20, stdev=337.02, samples=5 00:08:56.112 lat (usec) : 250=38.37%, 500=47.32%, 750=0.80% 00:08:56.112 lat (msec) : 50=13.32% 00:08:56.112 cpu : usr=0.10%, sys=0.31%, ctx=503, majf=0, minf=1 00:08:56.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:56.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.112 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:56.112 00:08:56.112 Run status group 0 (all jobs): 00:08:56.113 READ: bw=1752KiB/s (1794kB/s), 139KiB/s-1021KiB/s (143kB/s-1045kB/s), io=6536KiB (6693kB), run=2926-3731msec 00:08:56.113 00:08:56.113 Disk stats (read/write): 00:08:56.113 nvme0n1: ios=725/0, merge=0/0, ticks=3595/0, in_queue=3595, util=98.97% 00:08:56.113 nvme0n2: ios=127/0, merge=0/0, ticks=3581/0, in_queue=3581, util=96.46% 00:08:56.113 nvme0n3: ios=119/0, merge=0/0, ticks=3069/0, in_queue=3069, util=96.72% 00:08:56.113 nvme0n4: ios=500/0, merge=0/0, ticks=2806/0, in_queue=2806, util=96.78% 00:08:56.113 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.113 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:56.370 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.370 20:36:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:56.628 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.628 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:56.886 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.886 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:57.143 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:57.143 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1524445 00:08:57.143 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:57.143 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:57.401 nvmf hotplug test: fio failed as expected 00:08:57.401 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.659 rmmod nvme_tcp 
00:08:57.659 rmmod nvme_fabrics 00:08:57.659 rmmod nvme_keyring 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1522424 ']' 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1522424 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1522424 ']' 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1522424 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522424 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522424' 00:08:57.659 killing process with pid 1522424 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1522424 00:08:57.659 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1522424 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.918 
20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.918 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.448 00:09:00.448 real 0m23.430s 00:09:00.448 user 1m22.368s 00:09:00.448 sys 0m6.313s 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.448 ************************************ 00:09:00.448 END TEST nvmf_fio_target 00:09:00.448 ************************************ 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.448 ************************************ 00:09:00.448 START TEST nvmf_bdevio 
00:09:00.448 ************************************ 00:09:00.448 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:00.449 * Looking for test storage... 00:09:00.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.449 20:36:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.449 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.348 20:36:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.348 20:36:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.348 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.349 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:09:02.349 00:09:02.349 --- 10.0.0.2 ping statistics --- 00:09:02.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.349 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:09:02.349 00:09:02.349 --- 10.0.0.1 ping statistics --- 00:09:02.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.349 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1527168 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1527168 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1527168 ']' 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.349 20:36:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.349 [2024-07-24 20:36:57.757693] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:09:02.349 [2024-07-24 20:36:57.757790] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.349 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.349 [2024-07-24 20:36:57.833923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.607 [2024-07-24 20:36:57.960178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.607 [2024-07-24 20:36:57.960251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.607 [2024-07-24 20:36:57.960270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.607 [2024-07-24 20:36:57.960283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.607 [2024-07-24 20:36:57.960299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:02.607 [2024-07-24 20:36:57.960384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:02.608 [2024-07-24 20:36:57.960659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:02.608 [2024-07-24 20:36:57.960714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:02.608 [2024-07-24 20:36:57.960717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 [2024-07-24 20:36:58.113772] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.608 20:36:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 Malloc0 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.608 [2024-07-24 20:36:58.166717] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:02.608 { 00:09:02.608 "params": { 00:09:02.608 "name": "Nvme$subsystem", 00:09:02.608 "trtype": "$TEST_TRANSPORT", 00:09:02.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.608 "adrfam": "ipv4", 00:09:02.608 "trsvcid": "$NVMF_PORT", 00:09:02.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.608 "hdgst": ${hdgst:-false}, 00:09:02.608 "ddgst": ${ddgst:-false} 00:09:02.608 }, 00:09:02.608 "method": "bdev_nvme_attach_controller" 00:09:02.608 } 00:09:02.608 EOF 00:09:02.608 )") 00:09:02.608 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:02.897 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:02.897 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:02.897 20:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:02.897 "params": { 00:09:02.897 "name": "Nvme1", 00:09:02.897 "trtype": "tcp", 00:09:02.897 "traddr": "10.0.0.2", 00:09:02.897 "adrfam": "ipv4", 00:09:02.897 "trsvcid": "4420", 00:09:02.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:02.897 "hdgst": false, 00:09:02.897 "ddgst": false 00:09:02.897 }, 00:09:02.897 "method": "bdev_nvme_attach_controller" 00:09:02.897 }' 00:09:02.897 [2024-07-24 20:36:58.214802] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:09:02.897 [2024-07-24 20:36:58.214871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527311 ] 00:09:02.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.897 [2024-07-24 20:36:58.274404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.897 [2024-07-24 20:36:58.390666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.897 [2024-07-24 20:36:58.390716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.897 [2024-07-24 20:36:58.390720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.160 I/O targets: 00:09:03.160 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:03.160 00:09:03.160 00:09:03.160 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.160 http://cunit.sourceforge.net/ 00:09:03.160 00:09:03.160 00:09:03.160 Suite: bdevio tests on: Nvme1n1 00:09:03.160 Test: blockdev write read block ...passed 00:09:03.160 Test: blockdev write zeroes read block ...passed 00:09:03.160 Test: blockdev write zeroes read no split 
...passed 00:09:03.417 Test: blockdev write zeroes read split ...passed 00:09:03.417 Test: blockdev write zeroes read split partial ...passed 00:09:03.417 Test: blockdev reset ...[2024-07-24 20:36:58.769123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:03.417 [2024-07-24 20:36:58.769240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfac580 (9): Bad file descriptor 00:09:03.417 [2024-07-24 20:36:58.828565] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:03.417 passed 00:09:03.417 Test: blockdev write read 8 blocks ...passed 00:09:03.417 Test: blockdev write read size > 128k ...passed 00:09:03.417 Test: blockdev write read invalid size ...passed 00:09:03.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:03.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:03.417 Test: blockdev write read max offset ...passed 00:09:03.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:03.674 Test: blockdev writev readv 8 blocks ...passed 00:09:03.674 Test: blockdev writev readv 30 x 1block ...passed 00:09:03.675 Test: blockdev writev readv block ...passed 00:09:03.675 Test: blockdev writev readv size > 128k ...passed 00:09:03.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:03.675 Test: blockdev comparev and writev ...[2024-07-24 20:36:59.040881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.040927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.040950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:03.675 [2024-07-24 20:36:59.040966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.041310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.041335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.041357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.041372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.041680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.041704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.041725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.041741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.042057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.042081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.042102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:03.675 [2024-07-24 20:36:59.042117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:03.675 passed 00:09:03.675 Test: blockdev nvme passthru rw ...passed 00:09:03.675 Test: blockdev nvme passthru vendor specific ...[2024-07-24 20:36:59.124500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:03.675 [2024-07-24 20:36:59.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.124688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:03.675 [2024-07-24 20:36:59.124711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.124868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:03.675 [2024-07-24 20:36:59.124891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:03.675 [2024-07-24 20:36:59.125046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:03.675 [2024-07-24 20:36:59.125069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:03.675 passed 00:09:03.675 Test: blockdev nvme admin passthru ...passed 00:09:03.675 Test: blockdev copy ...passed 00:09:03.675 00:09:03.675 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.675 suites 1 1 n/a 0 0 00:09:03.675 tests 23 23 23 0 0 00:09:03.675 asserts 152 152 152 0 n/a 00:09:03.675 00:09:03.675 Elapsed time = 
1.142 seconds 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.932 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.933 rmmod nvme_tcp 00:09:03.933 rmmod nvme_fabrics 00:09:03.933 rmmod nvme_keyring 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1527168 ']' 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1527168 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 1527168 ']' 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1527168 00:09:03.933 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527168 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527168' 00:09:04.190 killing process with pid 1527168 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1527168 00:09:04.190 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1527168 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.448 20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.448 
20:36:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.351 00:09:06.351 real 0m6.382s 00:09:06.351 user 0m10.275s 00:09:06.351 sys 0m2.060s 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.351 ************************************ 00:09:06.351 END TEST nvmf_bdevio 00:09:06.351 ************************************ 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:06.351 00:09:06.351 real 3m55.708s 00:09:06.351 user 10m9.285s 00:09:06.351 sys 1m7.163s 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.351 20:37:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.351 ************************************ 00:09:06.351 END TEST nvmf_target_core 00:09:06.351 ************************************ 00:09:06.610 20:37:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:06.610 20:37:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:06.610 20:37:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.610 20:37:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.610 ************************************ 00:09:06.610 START TEST nvmf_target_extra 00:09:06.610 ************************************ 00:09:06.610 20:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:06.610 * Looking for test storage... 
00:09:06.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.610 20:37:02 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:06.610 ************************************ 00:09:06.610 START TEST nvmf_example 00:09:06.610 ************************************ 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:06.610 * Looking for test storage... 
00:09:06.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.610 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:06.611 20:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.611 20:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.611 20:37:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.529 20:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.529 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.530 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:09:08.788 00:09:08.788 --- 10.0.0.2 ping statistics --- 00:09:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.788 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:09:08.788 00:09:08.788 --- 10.0.0.1 ping statistics --- 00:09:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.788 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1529436 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1529436 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1529436 ']' 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.788 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:08.788 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.721 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.721 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:09.721 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.722 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:09.979 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:09.979 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.172 Initializing NVMe Controllers 00:09:22.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:22.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:22.172 Initialization complete. Launching workers. 00:09:22.172 ======================================================== 00:09:22.172 Latency(us) 00:09:22.172 Device Information : IOPS MiB/s Average min max 00:09:22.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14680.30 57.34 4359.68 893.47 15265.58 00:09:22.172 ======================================================== 00:09:22.172 Total : 14680.30 57.34 4359.68 893.47 15265.58 00:09:22.172 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.172 rmmod nvme_tcp 00:09:22.172 rmmod nvme_fabrics 00:09:22.172 rmmod nvme_keyring 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
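The target bring-up traced above (create the TCP transport, a malloc bdev, a subsystem with one namespace and a listener, then drive I/O with `spdk_nvme_perf`) can be sketched as a standalone script. The NQN, serial, address, port, and sizes are copied from this log; the `rpc` wrapper and its `DRY_RUN` default are additions for illustration, so the sketch prints the RPC calls rather than requiring a live SPDK target:

```shell
#!/usr/bin/env bash
# Hedged sketch of the target configuration recorded in the log above.
# With DRY_RUN=1 (the default here) each RPC is echoed instead of sent,
# so no running nvmf target app or privileged environment is needed.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
DRY_RUN=${DRY_RUN:-1}

rpc() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "rpc.py $*"
    else
        "$SPDK_DIR/scripts/rpc.py" "$@"
    fi
}

# TCP transport with 8192 bytes of in-capsule data (-o -u 8192 in the log)
rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks; SPDK names the first one Malloc0
rpc bdev_malloc_create 64 512
# Subsystem, namespace, and TCP listener, matching nvmf_example.sh lines 49-57
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With the target configured, the log then runs the initiator-side benchmark: `spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'`, i.e. queue depth 64, 4 KiB I/O, 30% reads, for 10 seconds.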
nvmf/common.sh@124 -- # set -e 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1529436 ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1529436 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1529436 ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1529436 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529436 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529436' 00:09:22.172 killing process with pid 1529436 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1529436 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1529436 00:09:22.172 nvmf threads initialize successfully 00:09:22.172 bdev subsystem init successfully 00:09:22.172 created a nvmf target service 00:09:22.172 create targets's poll groups done 00:09:22.172 all subsystems of target started 00:09:22.172 nvmf target is running 00:09:22.172 all subsystems of target stopped 00:09:22.172 destroy targets's poll groups done 00:09:22.172 destroyed the nvmf target 
service 00:09:22.172 bdev subsystem finish successfully 00:09:22.172 nvmf threads destroy successfully 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.172 20:37:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.743 00:09:22.743 real 0m16.004s 00:09:22.743 user 0m44.787s 00:09:22.743 sys 0m3.614s 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.743 ************************************ 00:09:22.743 END TEST nvmf_example 00:09:22.743 ************************************ 00:09:22.743 20:37:18 
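The `nvmftestfini` teardown visible above (unload the kernel initiator modules, kill the target app, remove the SPDK network namespace, flush the peer interface) can be sketched as follows. The pid, namespace, and interface names are the ones this log used; the `run` wrapper and its `DRY_RUN` default are additions so the privileged commands are echoed rather than executed:

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvmftestfini cleanup from nvmf/common.sh as traced
# in the log above. DRY_RUN=1 (default) echoes each privileged command.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NVMFPID=${1:-1529436}    # pid of the nvmf example app in this log

sync
# Unload kernel initiator modules; the log shows modprobe -r also pulling
# out dependent modules (rmmod nvme_tcp / nvme_fabrics / nvme_keyring).
for mod in nvme-tcp nvme-fabrics; do
    run modprobe -v -r "$mod"
done
# Stop the target app, then tear down the test network namespace.
run kill "$NVMFPID"
run ip netns delete cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_1
```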
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:22.743 ************************************ 00:09:22.743 START TEST nvmf_filesystem 00:09:22.743 ************************************ 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:22.743 * Looking for test storage... 00:09:22.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:22.743 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:22.744 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:22.744 #define SPDK_CONFIG_H 00:09:22.744 #define SPDK_CONFIG_APPS 1 00:09:22.744 #define SPDK_CONFIG_ARCH native 00:09:22.744 #undef SPDK_CONFIG_ASAN 00:09:22.744 #undef SPDK_CONFIG_AVAHI 00:09:22.744 #undef SPDK_CONFIG_CET 00:09:22.744 #define SPDK_CONFIG_COVERAGE 1 00:09:22.744 #define SPDK_CONFIG_CROSS_PREFIX 00:09:22.744 #undef SPDK_CONFIG_CRYPTO 00:09:22.744 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:22.744 #undef SPDK_CONFIG_CUSTOMOCF 00:09:22.744 #undef SPDK_CONFIG_DAOS 00:09:22.744 #define SPDK_CONFIG_DAOS_DIR 00:09:22.744 #define SPDK_CONFIG_DEBUG 1 00:09:22.744 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:22.744 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:22.744 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:22.744 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:22.744 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:22.744 #undef SPDK_CONFIG_DPDK_UADK 00:09:22.744 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:22.744 #define SPDK_CONFIG_EXAMPLES 1 00:09:22.744 #undef SPDK_CONFIG_FC 00:09:22.744 #define SPDK_CONFIG_FC_PATH 00:09:22.744 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:22.744 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:22.744 
#undef SPDK_CONFIG_FUSE 00:09:22.744 #undef SPDK_CONFIG_FUZZER 00:09:22.744 #define SPDK_CONFIG_FUZZER_LIB 00:09:22.744 #undef SPDK_CONFIG_GOLANG 00:09:22.744 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:22.744 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:22.744 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:22.744 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:22.744 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:22.744 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:22.744 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:22.744 #define SPDK_CONFIG_IDXD 1 00:09:22.744 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:22.744 #undef SPDK_CONFIG_IPSEC_MB 00:09:22.744 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:22.744 #define SPDK_CONFIG_ISAL 1 00:09:22.744 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:22.744 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:22.744 #define SPDK_CONFIG_LIBDIR 00:09:22.744 #undef SPDK_CONFIG_LTO 00:09:22.744 #define SPDK_CONFIG_MAX_LCORES 128 00:09:22.744 #define SPDK_CONFIG_NVME_CUSE 1 00:09:22.744 #undef SPDK_CONFIG_OCF 00:09:22.744 #define SPDK_CONFIG_OCF_PATH 00:09:22.744 #define SPDK_CONFIG_OPENSSL_PATH 00:09:22.744 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:22.744 #define SPDK_CONFIG_PGO_DIR 00:09:22.744 #undef SPDK_CONFIG_PGO_USE 00:09:22.744 #define SPDK_CONFIG_PREFIX /usr/local 00:09:22.744 #undef SPDK_CONFIG_RAID5F 00:09:22.744 #undef SPDK_CONFIG_RBD 00:09:22.744 #define SPDK_CONFIG_RDMA 1 00:09:22.744 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:22.744 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:22.744 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:22.744 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:22.744 #define SPDK_CONFIG_SHARED 1 00:09:22.744 #undef SPDK_CONFIG_SMA 00:09:22.744 #define SPDK_CONFIG_TESTS 1 00:09:22.744 #undef SPDK_CONFIG_TSAN 00:09:22.744 #define SPDK_CONFIG_UBLK 1 00:09:22.744 #define SPDK_CONFIG_UBSAN 1 00:09:22.744 #undef SPDK_CONFIG_UNIT_TESTS 00:09:22.744 #undef SPDK_CONFIG_URING 00:09:22.744 #define SPDK_CONFIG_URING_PATH 00:09:22.744 #undef 
SPDK_CONFIG_URING_ZNS 00:09:22.744 #undef SPDK_CONFIG_USDT 00:09:22.744 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:22.744 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:22.744 #define SPDK_CONFIG_VFIO_USER 1 00:09:22.744 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:22.744 #define SPDK_CONFIG_VHOST 1 00:09:22.744 #define SPDK_CONFIG_VIRTIO 1 00:09:22.744 #undef SPDK_CONFIG_VTUNE 00:09:22.744 #define SPDK_CONFIG_VTUNE_DIR 00:09:22.744 #define SPDK_CONFIG_WERROR 1 00:09:22.744 #define SPDK_CONFIG_WPDK_DIR 00:09:22.744 #undef SPDK_CONFIG_XNVME 00:09:22.744 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.744 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
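The `applications.sh@23` check above decides whether this is a debug build by glob-matching the generated config header against `#define SPDK_CONFIG_DEBUG` (the escaped `*\#\d\e\f\i\n\e\ ...*` pattern in the trace). The same test can be reproduced in isolation; the temp file here is a stand-in for SPDK's real `include/spdk/config.h`:

```shell
#!/usr/bin/env bash
# Reproduces the SPDK_CONFIG_DEBUG detection from common/applications.sh
# against a minimal stand-in config header.
config=$(mktemp)
cat > "$config" <<'EOF'
#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#endif /* SPDK_CONFIG_H */
EOF

# Bash [[ ... == *pattern* ]] glob match over the whole file contents,
# exactly as applications.sh does before enabling debug-app behavior.
if [[ $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    mode=debug
else
    mode=release
fi
echo "$mode build"
rm -f "$config"
```

In the log this match succeeds (the dumped header contains `#define SPDK_CONFIG_DEBUG 1`), so the trace proceeds with `(( SPDK_AUTOTEST_DEBUG_APPS ))` on line 24.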
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.745 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:22.745 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:22.745 
20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:22.745 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:22.745 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:22.746 
20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:22.746 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:22.746 
20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:22.746 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1531141 ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1531141 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.hJlvUJ 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hJlvUJ/tests/target /tmp/spdk.hJlvUJ 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:22.747 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55530278912 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994729472 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6464450560 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30987444224 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:22.747 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376539136 00:09:22.747 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398948352 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996742144 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997364736 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=622592 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199468032 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199472128 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:22.748 * Looking for test storage... 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55530278912 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=8679043072 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.748 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.749 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.749 20:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.749 20:37:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.282 20:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.282 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:25.283 20:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:25.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:09:25.283 00:09:25.283 --- 10.0.0.2 ping statistics --- 00:09:25.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.283 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:09:25.283 00:09:25.283 --- 10.0.0.1 ping statistics --- 00:09:25.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.283 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:25.283 20:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.283 ************************************ 00:09:25.283 START TEST nvmf_filesystem_no_in_capsule 00:09:25.283 ************************************ 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1532763 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1532763 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1532763 ']' 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.283 20:37:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.283 [2024-07-24 20:37:20.565139] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:09:25.283 [2024-07-24 20:37:20.565209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.283 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.283 [2024-07-24 20:37:20.634988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.283 [2024-07-24 20:37:20.756424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.283 [2024-07-24 20:37:20.756485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.283 [2024-07-24 20:37:20.756501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.283 [2024-07-24 20:37:20.756514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.283 [2024-07-24 20:37:20.756525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.283 [2024-07-24 20:37:20.756621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.283 [2024-07-24 20:37:20.756678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.283 [2024-07-24 20:37:20.756728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.283 [2024-07-24 20:37:20.756731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 [2024-07-24 20:37:21.586940] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.216 [2024-07-24 20:37:21.775411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:26.216 20:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:26.216 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:26.474 { 00:09:26.474 "name": "Malloc1", 00:09:26.474 "aliases": [ 00:09:26.474 "7c6fc17d-2440-4729-b191-3c18b4d89a19" 00:09:26.474 ], 00:09:26.474 "product_name": "Malloc disk", 00:09:26.474 "block_size": 512, 00:09:26.474 "num_blocks": 1048576, 00:09:26.474 "uuid": "7c6fc17d-2440-4729-b191-3c18b4d89a19", 00:09:26.474 "assigned_rate_limits": { 00:09:26.474 "rw_ios_per_sec": 0, 00:09:26.474 "rw_mbytes_per_sec": 0, 00:09:26.474 "r_mbytes_per_sec": 0, 00:09:26.474 "w_mbytes_per_sec": 0 00:09:26.474 }, 00:09:26.474 "claimed": true, 00:09:26.474 "claim_type": "exclusive_write", 00:09:26.474 "zoned": false, 00:09:26.474 "supported_io_types": { 00:09:26.474 "read": true, 00:09:26.474 "write": true, 00:09:26.474 "unmap": true, 00:09:26.474 "flush": true, 00:09:26.474 "reset": true, 00:09:26.474 "nvme_admin": false, 00:09:26.474 "nvme_io": false, 00:09:26.474 "nvme_io_md": false, 00:09:26.474 "write_zeroes": true, 00:09:26.474 "zcopy": true, 00:09:26.474 "get_zone_info": false, 00:09:26.474 "zone_management": false, 00:09:26.474 "zone_append": false, 00:09:26.474 "compare": false, 00:09:26.474 "compare_and_write": 
false, 00:09:26.474 "abort": true, 00:09:26.474 "seek_hole": false, 00:09:26.474 "seek_data": false, 00:09:26.474 "copy": true, 00:09:26.474 "nvme_iov_md": false 00:09:26.474 }, 00:09:26.474 "memory_domains": [ 00:09:26.474 { 00:09:26.474 "dma_device_id": "system", 00:09:26.474 "dma_device_type": 1 00:09:26.474 }, 00:09:26.474 { 00:09:26.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.474 "dma_device_type": 2 00:09:26.474 } 00:09:26.474 ], 00:09:26.474 "driver_specific": {} 00:09:26.474 } 00:09:26.474 ]' 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:26.474 20:37:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.039 20:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:27.039 20:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.039 20:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.039 20:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:27.039 20:37:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:29.611 20:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:29.611 20:37:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:29.611 20:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:30.177 20:37:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:31.109 20:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.109 ************************************ 00:09:31.109 START TEST filesystem_ext4 00:09:31.109 ************************************ 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:31.109 20:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:31.109 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:31.109 mke2fs 1.46.5 (30-Dec-2021) 00:09:31.109 Discarding device blocks: 0/522240 done 00:09:31.109 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:31.109 Filesystem UUID: d869e164-c7a1-4188-9946-88fca91c2c67 00:09:31.109 Superblock backups stored on blocks: 00:09:31.109 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:31.109 00:09:31.109 Allocating group tables: 0/64 done 00:09:31.109 Writing inode tables: 0/64 done 00:09:31.367 Creating journal (8192 blocks): done 00:09:31.367 Writing superblocks and filesystem accounting information: 0/64 done 00:09:31.367 00:09:31.367 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:31.367 20:37:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:31.932 20:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1532763 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:31.932 00:09:31.932 real 0m0.951s 00:09:31.932 user 0m0.016s 00:09:31.932 sys 0m0.054s 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:31.932 ************************************ 00:09:31.932 END TEST filesystem_ext4 00:09:31.932 ************************************ 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:31.932 
20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.932 ************************************ 00:09:31.932 START TEST filesystem_btrfs 00:09:31.932 ************************************ 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:31.932 20:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:31.932 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:32.497 btrfs-progs v6.6.2 00:09:32.497 See https://btrfs.readthedocs.io for more information. 00:09:32.497 00:09:32.497 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:32.497 NOTE: several default settings have changed in version 5.15, please make sure 00:09:32.497 this does not affect your deployments: 00:09:32.497 - DUP for metadata (-m dup) 00:09:32.497 - enabled no-holes (-O no-holes) 00:09:32.497 - enabled free-space-tree (-R free-space-tree) 00:09:32.497 00:09:32.497 Label: (null) 00:09:32.497 UUID: 30831277-0d54-46b6-82db-0a8aa407eae6 00:09:32.497 Node size: 16384 00:09:32.497 Sector size: 4096 00:09:32.497 Filesystem size: 510.00MiB 00:09:32.497 Block group profiles: 00:09:32.497 Data: single 8.00MiB 00:09:32.497 Metadata: DUP 32.00MiB 00:09:32.497 System: DUP 8.00MiB 00:09:32.497 SSD detected: yes 00:09:32.497 Zoned device: no 00:09:32.497 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:32.497 Runtime features: free-space-tree 00:09:32.497 Checksum: crc32c 00:09:32.497 Number of devices: 1 00:09:32.497 Devices: 00:09:32.497 ID SIZE PATH 00:09:32.497 1 510.00MiB /dev/nvme0n1p1 00:09:32.497 00:09:32.497 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:32.497 20:37:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1532763 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:33.428 00:09:33.428 real 0m1.355s 00:09:33.428 user 0m0.017s 00:09:33.428 sys 0m0.113s 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:33.428 ************************************ 00:09:33.428 END TEST filesystem_btrfs 00:09:33.428 ************************************ 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.428 ************************************ 00:09:33.428 START TEST filesystem_xfs 00:09:33.428 ************************************ 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:33.428 20:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:33.428 20:37:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:33.428 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:33.428 = sectsz=512 attr=2, projid32bit=1 00:09:33.428 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:33.428 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:33.428 data = bsize=4096 blocks=130560, imaxpct=25 00:09:33.428 = sunit=0 swidth=0 blks 00:09:33.428 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:33.428 log =internal log bsize=4096 blocks=16384, version=2 00:09:33.428 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:33.428 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:34.797 Discarding blocks...Done. 
00:09:34.797 20:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:34.797 20:37:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:36.692 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1532763 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:36.949 20:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:36.949 00:09:36.949 real 0m3.477s 00:09:36.949 user 0m0.018s 00:09:36.949 sys 0m0.061s 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 ************************************ 00:09:36.949 END TEST filesystem_xfs 00:09:36.949 ************************************ 00:09:36.949 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1532763 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1532763 ']' 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1532763 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1532763 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1532763' 00:09:37.207 killing process with pid 1532763 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1532763 00:09:37.207 20:37:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1532763 00:09:37.771 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:37.771 00:09:37.771 real 0m12.719s 00:09:37.771 user 0m48.847s 00:09:37.771 sys 0m1.887s 00:09:37.771 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.771 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.771 ************************************ 00:09:37.771 END TEST nvmf_filesystem_no_in_capsule 00:09:37.771 ************************************ 00:09:37.771 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.772 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 ************************************ 00:09:37.772 START TEST nvmf_filesystem_in_capsule 00:09:37.772 ************************************ 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1534569 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1534569 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1534569 ']' 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.772 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.772 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 [2024-07-24 20:37:33.331110] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:09:37.772 [2024-07-24 20:37:33.331203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.029 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.029 [2024-07-24 20:37:33.398664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.029 [2024-07-24 20:37:33.509712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.029 [2024-07-24 20:37:33.509769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.029 [2024-07-24 20:37:33.509798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.029 [2024-07-24 20:37:33.509809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.029 [2024-07-24 20:37:33.509818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:38.029 [2024-07-24 20:37:33.509905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.029 [2024-07-24 20:37:33.509971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.029 [2024-07-24 20:37:33.510035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.029 [2024-07-24 20:37:33.510038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 [2024-07-24 20:37:33.664489] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.287 [2024-07-24 20:37:33.844154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:38.287 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.287 20:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:38.545 { 00:09:38.545 "name": "Malloc1", 00:09:38.545 "aliases": [ 00:09:38.545 "a8164fdb-d29c-4c04-b543-3147ff9773bb" 00:09:38.545 ], 00:09:38.545 "product_name": "Malloc disk", 00:09:38.545 "block_size": 512, 00:09:38.545 "num_blocks": 1048576, 00:09:38.545 "uuid": "a8164fdb-d29c-4c04-b543-3147ff9773bb", 00:09:38.545 "assigned_rate_limits": { 00:09:38.545 "rw_ios_per_sec": 0, 00:09:38.545 "rw_mbytes_per_sec": 0, 00:09:38.545 "r_mbytes_per_sec": 0, 00:09:38.545 "w_mbytes_per_sec": 0 00:09:38.545 }, 00:09:38.545 "claimed": true, 00:09:38.545 "claim_type": "exclusive_write", 00:09:38.545 "zoned": false, 00:09:38.545 "supported_io_types": { 00:09:38.545 "read": true, 00:09:38.545 "write": true, 00:09:38.545 "unmap": true, 00:09:38.545 "flush": true, 00:09:38.545 "reset": true, 00:09:38.545 "nvme_admin": false, 00:09:38.545 "nvme_io": false, 00:09:38.545 "nvme_io_md": false, 00:09:38.545 "write_zeroes": true, 00:09:38.545 "zcopy": true, 00:09:38.545 "get_zone_info": false, 00:09:38.545 "zone_management": false, 00:09:38.545 "zone_append": false, 00:09:38.545 "compare": false, 00:09:38.545 "compare_and_write": false, 00:09:38.545 "abort": true, 00:09:38.545 "seek_hole": false, 00:09:38.545 "seek_data": false, 00:09:38.545 "copy": true, 00:09:38.545 "nvme_iov_md": false 00:09:38.545 }, 00:09:38.545 "memory_domains": [ 00:09:38.545 { 00:09:38.545 "dma_device_id": "system", 00:09:38.545 "dma_device_type": 1 00:09:38.545 }, 00:09:38.545 { 00:09:38.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.545 "dma_device_type": 2 00:09:38.545 } 00:09:38.545 ], 00:09:38.545 
"driver_specific": {} 00:09:38.545 } 00:09:38.545 ]' 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:38.545 20:37:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.110 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.110 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:39.110 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.110 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:09:39.110 20:37:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:41.635 20:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:41.635 20:37:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:41.893 20:37:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:43.264 ************************************ 00:09:43.264 START TEST filesystem_in_capsule_ext4 00:09:43.264 ************************************ 00:09:43.264 20:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:43.264 20:37:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:43.264 mke2fs 1.46.5 (30-Dec-2021) 00:09:43.264 Discarding device blocks: 
0/522240 done 00:09:43.264 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:43.264 Filesystem UUID: 7d02ae59-d0d5-4595-8c3f-59e462986019 00:09:43.264 Superblock backups stored on blocks: 00:09:43.264 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:43.264 00:09:43.264 Allocating group tables: 0/64 done 00:09:43.264 Writing inode tables: 0/64 done 00:09:43.264 Creating journal (8192 blocks): done 00:09:44.344 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:09:44.344 00:09:44.344 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:44.344 20:37:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:44.909 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:44.909 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1534569 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:45.166 00:09:45.166 real 0m2.074s 00:09:45.166 user 0m0.014s 00:09:45.166 sys 0m0.057s 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:45.166 ************************************ 00:09:45.166 END TEST filesystem_in_capsule_ext4 00:09:45.166 ************************************ 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.166 ************************************ 00:09:45.166 START 
TEST filesystem_in_capsule_btrfs 00:09:45.166 ************************************ 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:45.166 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:45.166 btrfs-progs v6.6.2 00:09:45.166 See https://btrfs.readthedocs.io for more information. 00:09:45.166 00:09:45.166 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:45.166 NOTE: several default settings have changed in version 5.15, please make sure 00:09:45.166 this does not affect your deployments: 00:09:45.166 - DUP for metadata (-m dup) 00:09:45.166 - enabled no-holes (-O no-holes) 00:09:45.166 - enabled free-space-tree (-R free-space-tree) 00:09:45.166 00:09:45.166 Label: (null) 00:09:45.166 UUID: 62e175db-4c40-40f3-bb11-08905e7b4869 00:09:45.166 Node size: 16384 00:09:45.166 Sector size: 4096 00:09:45.166 Filesystem size: 510.00MiB 00:09:45.166 Block group profiles: 00:09:45.166 Data: single 8.00MiB 00:09:45.166 Metadata: DUP 32.00MiB 00:09:45.166 System: DUP 8.00MiB 00:09:45.166 SSD detected: yes 00:09:45.166 Zoned device: no 00:09:45.166 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:45.166 Runtime features: free-space-tree 00:09:45.166 Checksum: crc32c 00:09:45.166 Number of devices: 1 00:09:45.167 Devices: 00:09:45.167 ID SIZE PATH 00:09:45.167 1 510.00MiB /dev/nvme0n1p1 00:09:45.167 00:09:45.167 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:45.167 20:37:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:45.730 20:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1534569 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:45.730 00:09:45.730 real 0m0.514s 00:09:45.730 user 0m0.008s 00:09:45.730 sys 0m0.117s 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:45.730 ************************************ 00:09:45.730 END TEST 
filesystem_in_capsule_btrfs 00:09:45.730 ************************************ 00:09:45.730 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.731 ************************************ 00:09:45.731 START TEST filesystem_in_capsule_xfs 00:09:45.731 ************************************ 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:45.731 20:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:45.731 20:37:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:45.731 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:45.731 = sectsz=512 attr=2, projid32bit=1 00:09:45.731 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:45.731 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:45.731 data = bsize=4096 blocks=130560, imaxpct=25 00:09:45.731 = sunit=0 swidth=0 blks 00:09:45.731 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:45.731 log =internal log bsize=4096 blocks=16384, version=2 00:09:45.731 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:45.731 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:46.662 Discarding blocks...Done. 
00:09:46.662 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:46.662 20:37:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1534569 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:49.215 00:09:49.215 real 0m3.141s 00:09:49.215 user 0m0.022s 00:09:49.215 sys 0m0.057s 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:49.215 ************************************ 00:09:49.215 END TEST filesystem_in_capsule_xfs 00:09:49.215 ************************************ 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.215 20:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:49.215 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1534569 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1534569 ']' 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1534569 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.216 20:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534569 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534569' 00:09:49.216 killing process with pid 1534569 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1534569 00:09:49.216 20:37:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1534569 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:49.474 00:09:49.474 real 0m11.728s 00:09:49.474 user 0m44.758s 00:09:49.474 sys 0m1.757s 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.474 ************************************ 00:09:49.474 END TEST nvmf_filesystem_in_capsule 00:09:49.474 ************************************ 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.474 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.474 rmmod nvme_tcp 00:09:49.733 rmmod nvme_fabrics 00:09:49.733 rmmod nvme_keyring 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.733 20:37:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.637 00:09:51.637 real 
0m29.014s 00:09:51.637 user 1m34.534s 00:09:51.637 sys 0m5.287s 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.637 ************************************ 00:09:51.637 END TEST nvmf_filesystem 00:09:51.637 ************************************ 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:51.637 ************************************ 00:09:51.637 START TEST nvmf_target_discovery 00:09:51.637 ************************************ 00:09:51.637 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:51.896 * Looking for test storage... 
00:09:51.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:51.896 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.897 20:37:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.799 
20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.799 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:09:53.800 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:53.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.800 20:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:53.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.800 20:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:53.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.800 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:09:54.058 00:09:54.058 --- 10.0.0.2 ping statistics --- 00:09:54.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.058 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:54.058 00:09:54.058 --- 10.0.0.1 ping statistics --- 00:09:54.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.058 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:54.058 20:37:49 
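The `nvmf/common.sh` steps traced above build a two-interface TCP test topology: the target NIC (`cvl_0_0`) is moved into a private network namespace `cvl_0_0_ns_spdk` and given 10.0.0.2, the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms reachability before `nvmf_tgt` starts. A minimal sketch of that sequence follows; interface names, addresses, and the port are copied from the log, while the `run`/`DRY_RUN` guard is my addition so the plan can be printed without root or the real NICs:

```shell
# Sketch of the nvmf_tcp_init topology from nvmf/common.sh (@229-@268 in the log).
# DRY_RUN=1 (the default here) prints each command instead of executing it;
# running for real needs root and the actual cvl_0_* interfaces.
set -euo pipefail

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

setup_nvmf_tcp_topology() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"           # target NIC into the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator IP, root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                             # root ns -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1         # target ns -> initiator
}

setup_nvmf_tcp_topology
```

The `nvmf_tgt` process is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why every subsequent listener binds to 10.0.0.2 inside the namespace.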
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1538041 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1538041 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1538041 ']' 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.058 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.058 [2024-07-24 20:37:49.552692] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:09:54.058 [2024-07-24 20:37:49.552790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.058 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.058 [2024-07-24 20:37:49.618843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.316 [2024-07-24 20:37:49.729553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.316 [2024-07-24 20:37:49.729607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.316 [2024-07-24 20:37:49.729637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.316 [2024-07-24 20:37:49.729648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.316 [2024-07-24 20:37:49.729658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:54.316 [2024-07-24 20:37:49.729740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.316 [2024-07-24 20:37:49.729808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.316 [2024-07-24 20:37:49.729856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.316 [2024-07-24 20:37:49.729859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.316 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.316 [2024-07-24 20:37:49.877452] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:54.574 20:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 Null1 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 [2024-07-24 20:37:49.917747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 Null2 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 
20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 Null3 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.574 Null4 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.574 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:54.575 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.575 20:37:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:54.575 20:37:50 
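The `target/discovery.sh` trace above provisions four identical targets: for each `i` in 1..4 it creates a null bdev `Null$i`, a subsystem `nqn.2016-06.io.spdk:cnode$i`, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; it then exposes the discovery service itself and adds a referral on port 4430. The loop can be sketched as below; `rpc_cmd` is the SPDK test-script wrapper around `scripts/rpc.py`, stubbed here (my addition) to print each RPC so the sequence is visible without a running `nvmf_tgt`:

```shell
# Sketch of the discovery.sh provisioning loop (target/discovery.sh@26-35 in the log).
set -euo pipefail

rpc_cmd() { echo "rpc: $*"; }   # stub; real runs go through scripts/rpc.py

provision_subsystems() {
    local i
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # discovery service listener plus a referral on port 4430
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}

provision_subsystems
```

This is exactly the shape the subsequent `nvme discover` output reflects: one current discovery entry, four NVMe subsystem entries on 4420, and one referral entry on 4430, six records in total.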
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.575 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:54.832 00:09:54.832 Discovery Log Number of Records 6, Generation counter 6 00:09:54.832 =====Discovery Log Entry 0====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: current discovery subsystem 00:09:54.832 treq: not required 00:09:54.832 portid: 0 00:09:54.832 trsvcid: 4420 00:09:54.832 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: explicit discovery connections, duplicate discovery information 00:09:54.832 sectype: none 00:09:54.832 =====Discovery Log Entry 1====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: nvme subsystem 00:09:54.832 treq: not required 00:09:54.832 portid: 0 00:09:54.832 trsvcid: 4420 00:09:54.832 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: none 00:09:54.832 sectype: none 00:09:54.832 =====Discovery Log Entry 2====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: nvme subsystem 00:09:54.832 treq: not required 00:09:54.832 portid: 0 00:09:54.832 trsvcid: 4420 00:09:54.832 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: none 00:09:54.832 sectype: none 00:09:54.832 =====Discovery Log Entry 3====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: nvme subsystem 00:09:54.832 treq: not required 00:09:54.832 portid: 
0 00:09:54.832 trsvcid: 4420 00:09:54.832 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: none 00:09:54.832 sectype: none 00:09:54.832 =====Discovery Log Entry 4====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: nvme subsystem 00:09:54.832 treq: not required 00:09:54.832 portid: 0 00:09:54.832 trsvcid: 4420 00:09:54.832 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: none 00:09:54.832 sectype: none 00:09:54.832 =====Discovery Log Entry 5====== 00:09:54.832 trtype: tcp 00:09:54.832 adrfam: ipv4 00:09:54.832 subtype: discovery subsystem referral 00:09:54.832 treq: not required 00:09:54.832 portid: 0 00:09:54.832 trsvcid: 4430 00:09:54.832 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:54.832 traddr: 10.0.0.2 00:09:54.832 eflags: none 00:09:54.832 sectype: none 00:09:54.832 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:54.832 Perform nvmf subsystem discovery via RPC 00:09:54.832 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:54.832 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.832 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.832 [ 00:09:54.832 { 00:09:54.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:54.832 "subtype": "Discovery", 00:09:54.832 "listen_addresses": [ 00:09:54.832 { 00:09:54.832 "trtype": "TCP", 00:09:54.832 "adrfam": "IPv4", 00:09:54.832 "traddr": "10.0.0.2", 00:09:54.832 "trsvcid": "4420" 00:09:54.832 } 00:09:54.832 ], 00:09:54.832 "allow_any_host": true, 00:09:54.832 "hosts": [] 00:09:54.832 }, 00:09:54.832 { 00:09:54.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.832 "subtype": "NVMe", 00:09:54.832 "listen_addresses": [ 
00:09:54.832 { 00:09:54.832 "trtype": "TCP", 00:09:54.832 "adrfam": "IPv4", 00:09:54.832 "traddr": "10.0.0.2", 00:09:54.832 "trsvcid": "4420" 00:09:54.832 } 00:09:54.832 ], 00:09:54.832 "allow_any_host": true, 00:09:54.832 "hosts": [], 00:09:54.832 "serial_number": "SPDK00000000000001", 00:09:54.832 "model_number": "SPDK bdev Controller", 00:09:54.832 "max_namespaces": 32, 00:09:54.832 "min_cntlid": 1, 00:09:54.832 "max_cntlid": 65519, 00:09:54.832 "namespaces": [ 00:09:54.832 { 00:09:54.832 "nsid": 1, 00:09:54.832 "bdev_name": "Null1", 00:09:54.832 "name": "Null1", 00:09:54.832 "nguid": "6BDA7F6C462A4327BE4A66271CE5AFEB", 00:09:54.832 "uuid": "6bda7f6c-462a-4327-be4a-66271ce5afeb" 00:09:54.832 } 00:09:54.832 ] 00:09:54.832 }, 00:09:54.832 { 00:09:54.832 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:54.832 "subtype": "NVMe", 00:09:54.832 "listen_addresses": [ 00:09:54.832 { 00:09:54.832 "trtype": "TCP", 00:09:54.832 "adrfam": "IPv4", 00:09:54.832 "traddr": "10.0.0.2", 00:09:54.832 "trsvcid": "4420" 00:09:54.832 } 00:09:54.832 ], 00:09:54.832 "allow_any_host": true, 00:09:54.832 "hosts": [], 00:09:54.832 "serial_number": "SPDK00000000000002", 00:09:54.832 "model_number": "SPDK bdev Controller", 00:09:54.832 "max_namespaces": 32, 00:09:54.832 "min_cntlid": 1, 00:09:54.832 "max_cntlid": 65519, 00:09:54.832 "namespaces": [ 00:09:54.832 { 00:09:54.832 "nsid": 1, 00:09:54.832 "bdev_name": "Null2", 00:09:54.832 "name": "Null2", 00:09:54.832 "nguid": "C069538F25014F76B63F71B6DF3B4078", 00:09:54.832 "uuid": "c069538f-2501-4f76-b63f-71b6df3b4078" 00:09:54.832 } 00:09:54.832 ] 00:09:54.832 }, 00:09:54.832 { 00:09:54.832 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:54.832 "subtype": "NVMe", 00:09:54.832 "listen_addresses": [ 00:09:54.832 { 00:09:54.832 "trtype": "TCP", 00:09:54.832 "adrfam": "IPv4", 00:09:54.832 "traddr": "10.0.0.2", 00:09:54.832 "trsvcid": "4420" 00:09:54.832 } 00:09:54.832 ], 00:09:54.832 "allow_any_host": true, 00:09:54.832 "hosts": [], 00:09:54.832 
"serial_number": "SPDK00000000000003", 00:09:54.832 "model_number": "SPDK bdev Controller", 00:09:54.832 "max_namespaces": 32, 00:09:54.832 "min_cntlid": 1, 00:09:54.832 "max_cntlid": 65519, 00:09:54.832 "namespaces": [ 00:09:54.832 { 00:09:54.832 "nsid": 1, 00:09:54.832 "bdev_name": "Null3", 00:09:54.832 "name": "Null3", 00:09:54.832 "nguid": "A074E346377F4BC488E9732E3B51357C", 00:09:54.832 "uuid": "a074e346-377f-4bc4-88e9-732e3b51357c" 00:09:54.833 } 00:09:54.833 ] 00:09:54.833 }, 00:09:54.833 { 00:09:54.833 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:54.833 "subtype": "NVMe", 00:09:54.833 "listen_addresses": [ 00:09:54.833 { 00:09:54.833 "trtype": "TCP", 00:09:54.833 "adrfam": "IPv4", 00:09:54.833 "traddr": "10.0.0.2", 00:09:54.833 "trsvcid": "4420" 00:09:54.833 } 00:09:54.833 ], 00:09:54.833 "allow_any_host": true, 00:09:54.833 "hosts": [], 00:09:54.833 "serial_number": "SPDK00000000000004", 00:09:54.833 "model_number": "SPDK bdev Controller", 00:09:54.833 "max_namespaces": 32, 00:09:54.833 "min_cntlid": 1, 00:09:54.833 "max_cntlid": 65519, 00:09:54.833 "namespaces": [ 00:09:54.833 { 00:09:54.833 "nsid": 1, 00:09:54.833 "bdev_name": "Null4", 00:09:54.833 "name": "Null4", 00:09:54.833 "nguid": "3F6625A88B124AAFB0A4F3B4F2969D4E", 00:09:54.833 "uuid": "3f6625a8-8b12-4aaf-b0a4-f3b4f2969d4e" 00:09:54.833 } 00:09:54.833 ] 00:09:54.833 } 00:09:54.833 ] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:54.833 
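After the discovery checks, the script tears down everything it created: each subsystem is deleted along with its null bdev, the 4430 referral is removed, and an empty `bdev_get_bdevs` listing (hence the empty `check_bdevs=`) is the pass condition before `nvmftestfini` unloads the kernel modules. Sketched with the same stubbed `rpc_cmd` (a stand-in, my addition, for the real `scripts/rpc.py` wrapper):

```shell
# Sketch of the discovery.sh teardown loop (target/discovery.sh@42-49 in the log).
set -euo pipefail

rpc_cmd() { echo "rpc: $*"; }   # stub; real runs go through scripts/rpc.py

teardown_subsystems() {
    local i
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    rpc_cmd bdev_get_bdevs               # expected to report no remaining bdevs
}

teardown_subsystems
```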
20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.833 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.833 rmmod nvme_tcp 00:09:54.833 rmmod nvme_fabrics 00:09:55.090 rmmod nvme_keyring 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1538041 ']' 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1538041 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1538041 ']' 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1538041 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1538041 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1538041' 00:09:55.090 killing process with pid 1538041 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1538041 00:09:55.090 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1538041 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.349 20:37:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.249 00:09:57.249 real 0m5.623s 00:09:57.249 user 0m4.758s 00:09:57.249 sys 0m1.862s 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.249 ************************************ 00:09:57.249 END TEST 
nvmf_target_discovery 00:09:57.249 ************************************ 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.249 20:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:57.507 ************************************ 00:09:57.507 START TEST nvmf_referrals 00:09:57.507 ************************************ 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:57.507 * Looking for test storage... 00:09:57.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.507 20:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.507 20:37:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.406 20:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.406 20:37:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:09:59.665 00:09:59.665 --- 10.0.0.2 ping statistics --- 00:09:59.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.665 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:09:59.665 00:09:59.665 --- 10.0.0.1 ping statistics --- 00:09:59.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.665 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1540124 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1540124 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1540124 ']' 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.665 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.665 [2024-07-24 20:37:55.101694] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:09:59.665 [2024-07-24 20:37:55.101791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.665 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.665 [2024-07-24 20:37:55.174093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.923 [2024-07-24 20:37:55.292841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.923 [2024-07-24 20:37:55.292891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:59.923 [2024-07-24 20:37:55.292920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.924 [2024-07-24 20:37:55.292932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.924 [2024-07-24 20:37:55.292942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.924 [2024-07-24 20:37:55.293008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.924 [2024-07-24 20:37:55.293115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.924 [2024-07-24 20:37:55.293162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.924 [2024-07-24 20:37:55.293165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.924 [2024-07-24 20:37:55.451802] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.924 [2024-07-24 20:37:55.464046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:59.924 20:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.924 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:00.182 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:00.440 20:37:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.698 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:00.955 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:00.956 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:01.214 20:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:01.214 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:01.471 20:37:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:01.471 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:01.471 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.472 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.472 rmmod nvme_tcp 00:10:01.729 rmmod nvme_fabrics 00:10:01.729 rmmod nvme_keyring 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1540124 ']' 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1540124 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1540124 ']' 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1540124 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1540124 00:10:01.729 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.730 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.730 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1540124' 00:10:01.730 killing process with pid 1540124 00:10:01.730 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1540124 00:10:01.730 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1540124 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.988 20:37:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.888 00:10:03.888 real 0m6.597s 00:10:03.888 user 0m9.555s 00:10:03.888 sys 0m2.170s 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.888 ************************************ 00:10:03.888 END TEST nvmf_referrals 00:10:03.888 ************************************ 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.888 20:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.147 ************************************ 00:10:04.147 START TEST nvmf_connect_disconnect 00:10:04.147 ************************************ 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:04.147 * Looking for test storage... 00:10:04.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.147 20:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.147 20:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:06.046 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.047 20:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.047 20:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:06.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:06.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.047 20:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:06.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.047 
20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:06.047 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.047 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:10:06.305 00:10:06.305 --- 10.0.0.2 ping statistics --- 00:10:06.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.305 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:06.305 00:10:06.305 --- 10.0.0.1 ping statistics --- 00:10:06.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.305 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.305 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1542418 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1542418 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1542418 ']' 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.306 20:38:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.306 [2024-07-24 20:38:01.754346] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:10:06.306 [2024-07-24 20:38:01.754442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.306 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.306 [2024-07-24 20:38:01.819868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.563 [2024-07-24 20:38:01.932374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.563 [2024-07-24 20:38:01.932434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.563 [2024-07-24 20:38:01.932449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.563 [2024-07-24 20:38:01.932462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.563 [2024-07-24 20:38:01.932472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:06.563 [2024-07-24 20:38:01.932526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.563 [2024-07-24 20:38:01.932554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.563 [2024-07-24 20:38:01.932612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.563 [2024-07-24 20:38:01.932614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.563 [2024-07-24 20:38:02.091809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.563 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.820 20:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.820 [2024-07-24 20:38:02.142614] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:06.820 20:38:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:09.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.237 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.237 rmmod nvme_tcp 00:10:20.237 rmmod nvme_fabrics 00:10:20.237 rmmod nvme_keyring 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1542418 ']' 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1542418 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1542418 ']' 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1542418 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:20.495 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.496 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1542418 00:10:20.496 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.496 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.496 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1542418' 00:10:20.496 killing process with pid 1542418 00:10:20.496 20:38:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1542418 00:10:20.496 20:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1542418 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.754 20:38:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:22.655 00:10:22.655 real 0m18.706s 00:10:22.655 user 0m55.988s 00:10:22.655 sys 0m3.371s 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:22.655 ************************************ 00:10:22.655 END TEST nvmf_connect_disconnect 00:10:22.655 ************************************ 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.655 20:38:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.914 ************************************ 00:10:22.914 START TEST nvmf_multitarget 00:10:22.914 ************************************ 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:22.914 * Looking for test storage... 00:10:22.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.914 20:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:22.914 
20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:22.914 20:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:24.816 20:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.816 20:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:24.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:24.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.816 20:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:24.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:24.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:24.816 20:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.816 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.817 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:25.075 00:10:25.075 --- 10.0.0.2 ping statistics --- 00:10:25.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.075 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:25.075 00:10:25.075 --- 10.0.0.1 ping statistics --- 00:10:25.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.075 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1546052 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1546052 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1546052 ']' 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.075 20:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:25.075 [2024-07-24 20:38:20.467173] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:10:25.075 [2024-07-24 20:38:20.467261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.075 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.075 [2024-07-24 20:38:20.535041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.333 [2024-07-24 20:38:20.656359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.333 [2024-07-24 20:38:20.656411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:25.333 [2024-07-24 20:38:20.656427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.333 [2024-07-24 20:38:20.656440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.333 [2024-07-24 20:38:20.656452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.333 [2024-07-24 20:38:20.656547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.333 [2024-07-24 20:38:20.656604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.333 [2024-07-24 20:38:20.656654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.333 [2024-07-24 20:38:20.656657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:25.898 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:25.898 20:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:26.155 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:26.155 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:26.155 "nvmf_tgt_1" 00:10:26.155 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:26.412 "nvmf_tgt_2" 00:10:26.412 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:26.412 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:26.412 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:26.412 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:26.412 true 00:10:26.412 20:38:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:26.670 true 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.670 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.670 rmmod nvme_tcp 00:10:26.670 rmmod nvme_fabrics 00:10:26.928 rmmod nvme_keyring 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1546052 ']' 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1546052 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1546052 ']' 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1546052 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1546052 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1546052' 00:10:26.928 killing process with pid 1546052 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1546052 00:10:26.928 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1546052 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.187 20:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.088 00:10:29.088 real 
0m6.368s 00:10:29.088 user 0m9.075s 00:10:29.088 sys 0m1.958s 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:29.088 ************************************ 00:10:29.088 END TEST nvmf_multitarget 00:10:29.088 ************************************ 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.088 ************************************ 00:10:29.088 START TEST nvmf_rpc 00:10:29.088 ************************************ 00:10:29.088 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:29.346 * Looking for test storage... 
00:10:29.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.346 
20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.346 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.347 20:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.347 20:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:31.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:31.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.282 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:31.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:31.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.283 20:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:31.283 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:31.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:10:31.541 00:10:31.541 --- 10.0.0.2 ping statistics --- 00:10:31.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.541 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:31.541 00:10:31.541 --- 10.0.0.1 ping statistics --- 00:10:31.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.541 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1548283 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1548283 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1548283 ']' 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.541 20:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 [2024-07-24 20:38:26.977950] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
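The harness above launches `nvmf_tgt` inside the namespace and then calls `waitforlisten`, which blocks until the app is listening on `/var/tmp/spdk.sock`. A hedged sketch of that polling idea (function name and shape are mine, not the actual helper from autotest_common.sh, which additionally probes the socket with an RPC before declaring the app ready):

```shell
# Poll until the target's RPC unix socket appears, giving up after N tries.
# Sketch only; the real waitforlisten also verifies the pid and issues an RPC.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                         # timed out; caller treats this as fatal
}
```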
00:10:31.541 [2024-07-24 20:38:26.978044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.541 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.541 [2024-07-24 20:38:27.043623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.800 [2024-07-24 20:38:27.156963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.800 [2024-07-24 20:38:27.157032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.800 [2024-07-24 20:38:27.157045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.800 [2024-07-24 20:38:27.157058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.800 [2024-07-24 20:38:27.157082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
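The EAL parameters above include `-c 0xF`, which selects cores 0 through 3 ("Total cores available: 4"). A small sketch (assuming a plain hex mask, not DPDK's bracketed core-list syntax) of decoding such a mask into the core list:

```shell
# Decode a hex CPU mask into the list of set core indices.
mask_to_cores() {
    local mask=$(( $1 ))
    local i=0
    local out=()
    while (( mask )); do
        (( mask & 1 )) && out+=("$i")  # bit i set => core i selected
        (( mask >>= 1, i++ ))
    done
    echo "${out[*]}"
}
mask_to_cores 0xF   # prints: 0 1 2 3
```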
00:10:31.800 [2024-07-24 20:38:27.157170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.800 [2024-07-24 20:38:27.157237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.800 [2024-07-24 20:38:27.157301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.800 [2024-07-24 20:38:27.157305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:31.800 "tick_rate": 2700000000, 00:10:31.800 "poll_groups": [ 00:10:31.800 { 00:10:31.800 "name": "nvmf_tgt_poll_group_000", 00:10:31.800 "admin_qpairs": 0, 00:10:31.800 "io_qpairs": 0, 00:10:31.800 "current_admin_qpairs": 0, 00:10:31.800 "current_io_qpairs": 0, 00:10:31.800 "pending_bdev_io": 0, 00:10:31.800 "completed_nvme_io": 0, 
00:10:31.800 "transports": [] 00:10:31.800 }, 00:10:31.800 { 00:10:31.800 "name": "nvmf_tgt_poll_group_001", 00:10:31.800 "admin_qpairs": 0, 00:10:31.800 "io_qpairs": 0, 00:10:31.800 "current_admin_qpairs": 0, 00:10:31.800 "current_io_qpairs": 0, 00:10:31.800 "pending_bdev_io": 0, 00:10:31.800 "completed_nvme_io": 0, 00:10:31.800 "transports": [] 00:10:31.800 }, 00:10:31.800 { 00:10:31.800 "name": "nvmf_tgt_poll_group_002", 00:10:31.800 "admin_qpairs": 0, 00:10:31.800 "io_qpairs": 0, 00:10:31.800 "current_admin_qpairs": 0, 00:10:31.800 "current_io_qpairs": 0, 00:10:31.800 "pending_bdev_io": 0, 00:10:31.800 "completed_nvme_io": 0, 00:10:31.800 "transports": [] 00:10:31.800 }, 00:10:31.800 { 00:10:31.800 "name": "nvmf_tgt_poll_group_003", 00:10:31.800 "admin_qpairs": 0, 00:10:31.800 "io_qpairs": 0, 00:10:31.800 "current_admin_qpairs": 0, 00:10:31.800 "current_io_qpairs": 0, 00:10:31.800 "pending_bdev_io": 0, 00:10:31.800 "completed_nvme_io": 0, 00:10:31.800 "transports": [] 00:10:31.800 } 00:10:31.800 ] 00:10:31.800 }' 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:31.800 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.058 [2024-07-24 20:38:27.412040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:32.058 "tick_rate": 2700000000, 00:10:32.058 "poll_groups": [ 00:10:32.058 { 00:10:32.058 "name": "nvmf_tgt_poll_group_000", 00:10:32.058 "admin_qpairs": 0, 00:10:32.058 "io_qpairs": 0, 00:10:32.058 "current_admin_qpairs": 0, 00:10:32.058 "current_io_qpairs": 0, 00:10:32.058 "pending_bdev_io": 0, 00:10:32.058 "completed_nvme_io": 0, 00:10:32.058 "transports": [ 00:10:32.058 { 00:10:32.058 "trtype": "TCP" 00:10:32.058 } 00:10:32.058 ] 00:10:32.058 }, 00:10:32.058 { 00:10:32.058 "name": "nvmf_tgt_poll_group_001", 00:10:32.058 "admin_qpairs": 0, 00:10:32.058 "io_qpairs": 0, 00:10:32.058 "current_admin_qpairs": 0, 00:10:32.058 "current_io_qpairs": 0, 00:10:32.058 "pending_bdev_io": 0, 00:10:32.058 "completed_nvme_io": 0, 00:10:32.058 "transports": [ 00:10:32.058 { 00:10:32.058 "trtype": "TCP" 00:10:32.058 } 00:10:32.058 ] 00:10:32.058 }, 00:10:32.058 { 00:10:32.058 "name": "nvmf_tgt_poll_group_002", 00:10:32.058 "admin_qpairs": 0, 00:10:32.058 "io_qpairs": 0, 00:10:32.058 "current_admin_qpairs": 0, 00:10:32.058 "current_io_qpairs": 0, 00:10:32.058 "pending_bdev_io": 0, 00:10:32.058 "completed_nvme_io": 0, 00:10:32.058 
"transports": [ 00:10:32.058 { 00:10:32.058 "trtype": "TCP" 00:10:32.058 } 00:10:32.058 ] 00:10:32.058 }, 00:10:32.058 { 00:10:32.058 "name": "nvmf_tgt_poll_group_003", 00:10:32.058 "admin_qpairs": 0, 00:10:32.058 "io_qpairs": 0, 00:10:32.058 "current_admin_qpairs": 0, 00:10:32.058 "current_io_qpairs": 0, 00:10:32.058 "pending_bdev_io": 0, 00:10:32.058 "completed_nvme_io": 0, 00:10:32.058 "transports": [ 00:10:32.058 { 00:10:32.058 "trtype": "TCP" 00:10:32.058 } 00:10:32.058 ] 00:10:32.058 } 00:10:32.058 ] 00:10:32.058 }' 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:32.058 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:32.059 20:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.059 Malloc1 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.059 [2024-07-24 20:38:27.577579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:32.059 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:32.059 [2024-07-24 20:38:27.600116] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:32.316 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:32.316 could not add new controller: failed to write to nvme-fabrics device 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
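The `NOT nvme connect ...` invocation above is an expected-failure check: before `nvmf_subsystem_add_host` runs, the controller must refuse the host, so the wrapper succeeds only when the wrapped command fails. A hedged sketch of such an inverter (the real helper lives in autotest_common.sh; this mirrors the `(( es > 128 ))` signal-death check visible in the trace, but its exact body is assumed):

```shell
# Succeed only when the wrapped command fails, as with the rejected
# `nvme connect` above. Exit statuses > 128 (killed by signal) are still
# treated as real failures and propagated.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by signal: propagate
    (( es == 0 )) && return 1        # unexpectedly succeeded: test fails
    return 0                         # failed as expected: test passes
}
```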
00:10:32.317 20:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.882 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.882 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:32.882 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.882 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:32.882 20:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:34.778 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.036 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:35.037 20:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.037 [2024-07-24 20:38:30.483934] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:35.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:35.037 could not add new controller: failed to write to nvme-fabrics device 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.037 20:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.969 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:35.969 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:35.969 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.969 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:35.969 20:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:37.866 20:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.866 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.867 20:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.867 [2024-07-24 20:38:33.372101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.867 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.868 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.868 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.868 20:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.800 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.800 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:38.800 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.800 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:38.800 20:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 [2024-07-24 20:38:36.187953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.696 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.260 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.260 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:10:41.260 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.260 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.260 20:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.781 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.781 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.782 [2024-07-24 20:38:38.934795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.782 20:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.040 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.040 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.040 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.040 20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.040 
20:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:46.565 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 [2024-07-24 20:38:41.678997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.566 20:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.824 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.824 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.824 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.824 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:46.824 20:38:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 [2024-07-24 20:38:44.485916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.348 20:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.913 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.913 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.913 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.913 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.913 20:38:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.869 20:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 [2024-07-24 20:38:47.354048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 
20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.869 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 
20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 [2024-07-24 20:38:47.402121] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.870 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 [2024-07-24 20:38:47.450319] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 [2024-07-24 20:38:47.498484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 [2024-07-24 20:38:47.546665] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.128 20:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.128 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.129 20:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:52.129 "tick_rate": 2700000000, 00:10:52.129 "poll_groups": [ 00:10:52.129 { 00:10:52.129 "name": "nvmf_tgt_poll_group_000", 00:10:52.129 "admin_qpairs": 2, 00:10:52.129 "io_qpairs": 84, 00:10:52.129 "current_admin_qpairs": 0, 00:10:52.129 "current_io_qpairs": 0, 00:10:52.129 "pending_bdev_io": 0, 00:10:52.129 "completed_nvme_io": 137, 00:10:52.129 "transports": [ 00:10:52.129 { 00:10:52.129 "trtype": "TCP" 00:10:52.129 } 00:10:52.129 ] 00:10:52.129 }, 00:10:52.129 { 00:10:52.129 "name": "nvmf_tgt_poll_group_001", 00:10:52.129 "admin_qpairs": 2, 00:10:52.129 "io_qpairs": 84, 00:10:52.129 "current_admin_qpairs": 0, 00:10:52.129 "current_io_qpairs": 0, 00:10:52.129 "pending_bdev_io": 0, 00:10:52.129 "completed_nvme_io": 183, 00:10:52.129 "transports": [ 00:10:52.129 { 00:10:52.129 "trtype": "TCP" 00:10:52.129 } 00:10:52.129 ] 00:10:52.129 }, 00:10:52.129 { 00:10:52.129 "name": "nvmf_tgt_poll_group_002", 00:10:52.129 "admin_qpairs": 1, 00:10:52.129 "io_qpairs": 84, 00:10:52.129 "current_admin_qpairs": 0, 00:10:52.129 "current_io_qpairs": 0, 00:10:52.129 "pending_bdev_io": 0, 00:10:52.129 "completed_nvme_io": 183, 00:10:52.129 "transports": [ 00:10:52.129 { 00:10:52.129 "trtype": "TCP" 00:10:52.129 } 00:10:52.129 ] 00:10:52.129 }, 00:10:52.129 { 00:10:52.129 "name": "nvmf_tgt_poll_group_003", 00:10:52.129 "admin_qpairs": 2, 00:10:52.129 "io_qpairs": 84, 00:10:52.129 "current_admin_qpairs": 0, 00:10:52.129 "current_io_qpairs": 0, 00:10:52.129 "pending_bdev_io": 0, 
00:10:52.129 "completed_nvme_io": 183, 00:10:52.129 "transports": [ 00:10:52.129 { 00:10:52.129 "trtype": "TCP" 00:10:52.129 } 00:10:52.129 ] 00:10:52.129 } 00:10:52.129 ] 00:10:52.129 }' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.129 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.129 rmmod nvme_tcp 00:10:52.386 rmmod nvme_fabrics 00:10:52.386 rmmod nvme_keyring 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1548283 ']' 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1548283 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1548283 ']' 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1548283 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1548283 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1548283' 00:10:52.386 killing process with pid 1548283 00:10:52.386 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1548283 00:10:52.387 20:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 1548283 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.644 20:38:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.174 00:10:55.174 real 0m25.476s 00:10:55.174 user 1m22.754s 00:10:55.174 sys 0m4.059s 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.174 ************************************ 00:10:55.174 END TEST nvmf_rpc 00:10:55.174 ************************************ 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:10:55.174 ************************************ 00:10:55.174 START TEST nvmf_invalid 00:10:55.174 ************************************ 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:55.174 * Looking for test storage... 00:10:55.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.174 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.175 20:38:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.078 20:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.078 
20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.078 20:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.078 
20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.078 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.078 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:10:57.078 00:10:57.078 --- 10.0.0.2 ping statistics --- 00:10:57.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.078 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:57.078 00:10:57.078 --- 10.0.0.1 ping statistics --- 00:10:57.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.078 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.078 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1552772 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1552772 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1552772 ']' 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.079 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.079 [2024-07-24 20:38:52.584768] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:10:57.079 [2024-07-24 20:38:52.584865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.336 [2024-07-24 20:38:52.659346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.336 [2024-07-24 20:38:52.766406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.336 [2024-07-24 20:38:52.766458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.336 [2024-07-24 20:38:52.766487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.336 [2024-07-24 20:38:52.766499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.336 [2024-07-24 20:38:52.766508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.336 [2024-07-24 20:38:52.766577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.336 [2024-07-24 20:38:52.766663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.336 [2024-07-24 20:38:52.766713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.336 [2024-07-24 20:38:52.766716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.336 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.336 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:57.336 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.336 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.336 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.594 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.594 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:57.594 20:38:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1302 00:10:57.594 [2024-07-24 20:38:53.150088] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:57.850 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:57.850 { 00:10:57.850 "nqn": "nqn.2016-06.io.spdk:cnode1302", 00:10:57.850 "tgt_name": "foobar", 00:10:57.850 "method": "nvmf_create_subsystem", 00:10:57.850 "req_id": 1 00:10:57.850 } 00:10:57.850 Got JSON-RPC error response 00:10:57.850 response: 00:10:57.850 { 00:10:57.850 "code": -32603, 00:10:57.850 "message": "Unable to find target foobar" 00:10:57.850 }' 00:10:57.850 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:57.850 { 00:10:57.850 "nqn": "nqn.2016-06.io.spdk:cnode1302", 00:10:57.850 "tgt_name": "foobar", 00:10:57.850 "method": "nvmf_create_subsystem", 00:10:57.850 "req_id": 1 00:10:57.850 } 00:10:57.850 Got JSON-RPC error response 00:10:57.850 response: 00:10:57.850 { 00:10:57.850 "code": -32603, 00:10:57.850 "message": "Unable to find target foobar" 00:10:57.850 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:57.850 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:57.850 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28130 00:10:57.850 [2024-07-24 20:38:53.398978] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28130: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:58.107 { 00:10:58.107 "nqn": "nqn.2016-06.io.spdk:cnode28130", 00:10:58.107 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:58.107 "method": "nvmf_create_subsystem", 00:10:58.107 "req_id": 1 00:10:58.107 } 00:10:58.107 Got JSON-RPC error response 00:10:58.107 response: 
00:10:58.107 { 00:10:58.107 "code": -32602, 00:10:58.107 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:58.107 }' 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:58.107 { 00:10:58.107 "nqn": "nqn.2016-06.io.spdk:cnode28130", 00:10:58.107 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:58.107 "method": "nvmf_create_subsystem", 00:10:58.107 "req_id": 1 00:10:58.107 } 00:10:58.107 Got JSON-RPC error response 00:10:58.107 response: 00:10:58.107 { 00:10:58.107 "code": -32602, 00:10:58.107 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:58.107 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9869 00:10:58.107 [2024-07-24 20:38:53.643747] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9869: invalid model number 'SPDK_Controller' 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:58.107 { 00:10:58.107 "nqn": "nqn.2016-06.io.spdk:cnode9869", 00:10:58.107 "model_number": "SPDK_Controller\u001f", 00:10:58.107 "method": "nvmf_create_subsystem", 00:10:58.107 "req_id": 1 00:10:58.107 } 00:10:58.107 Got JSON-RPC error response 00:10:58.107 response: 00:10:58.107 { 00:10:58.107 "code": -32602, 00:10:58.107 "message": "Invalid MN SPDK_Controller\u001f" 00:10:58.107 }' 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:58.107 { 00:10:58.107 "nqn": "nqn.2016-06.io.spdk:cnode9869", 00:10:58.107 "model_number": "SPDK_Controller\u001f", 00:10:58.107 "method": "nvmf_create_subsystem", 00:10:58.107 "req_id": 1 00:10:58.107 } 00:10:58.107 
Got JSON-RPC error response 00:10:58.107 response: 00:10:58.107 { 00:10:58.107 "code": -32602, 00:10:58.107 "message": "Invalid MN SPDK_Controller\u001f" 00:10:58.107 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.107 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:58.108 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:58.108 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:58.108 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.108 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.365 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:58.366 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:58.366 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:58.366 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '4'\''|[ii,O>k GVzgDhv2&P' 00:10:58.366 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4'\''|[ii,O>k GVzgDhv2&P' nqn.2016-06.io.spdk:cnode28596 00:10:58.625 [2024-07-24 20:38:53.976880] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28596: invalid serial number '4'|[ii,O>k GVzgDhv2&P' 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:58.625 { 00:10:58.625 "nqn": "nqn.2016-06.io.spdk:cnode28596", 00:10:58.625 "serial_number": "4'\''|[ii,O>k GVzgDhv2&P", 00:10:58.625 "method": "nvmf_create_subsystem", 00:10:58.625 "req_id": 1 00:10:58.625 } 00:10:58.625 Got JSON-RPC error response 00:10:58.625 response: 00:10:58.625 { 00:10:58.625 "code": -32602, 00:10:58.625 "message": "Invalid SN 4'\''|[ii,O>k GVzgDhv2&P" 00:10:58.625 }' 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:58.625 { 00:10:58.625 "nqn": "nqn.2016-06.io.spdk:cnode28596", 00:10:58.625 "serial_number": "4'|[ii,O>k GVzgDhv2&P", 00:10:58.625 "method": "nvmf_create_subsystem", 00:10:58.625 "req_id": 1 00:10:58.625 } 00:10:58.625 Got JSON-RPC 
error response 00:10:58.625 response: 00:10:58.625 { 00:10:58.625 "code": -32602, 00:10:58.625 "message": "Invalid SN 4'|[ii,O>k GVzgDhv2&P" 00:10:58.625 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:58.625 20:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:58.625 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:58.625 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:58.625 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:58.626 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.626 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.627 20:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c$"KT.SpI=Fd^kRgP0^\N_*p[t}'\''WGVp.(7)nyb3o' 00:10:58.627 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'c$"KT.SpI=Fd^kRgP0^\N_*p[t}'\''WGVp.(7)nyb3o' nqn.2016-06.io.spdk:cnode9301 00:10:58.885 [2024-07-24 20:38:54.358076] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9301: invalid model number 'c$"KT.SpI=Fd^kRgP0^\N_*p[t}'WGVp.(7)nyb3o' 00:10:58.885 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:58.885 { 00:10:58.885 "nqn": "nqn.2016-06.io.spdk:cnode9301", 00:10:58.885 "model_number": "c$\"KT.SpI=Fd^kRgP0^\\N_*p[t}'\''WGVp.(7)nyb3o", 00:10:58.885 
"method": "nvmf_create_subsystem", 00:10:58.885 "req_id": 1 00:10:58.885 } 00:10:58.885 Got JSON-RPC error response 00:10:58.885 response: 00:10:58.885 { 00:10:58.885 "code": -32602, 00:10:58.885 "message": "Invalid MN c$\"KT.SpI=Fd^kRgP0^\\N_*p[t}'\''WGVp.(7)nyb3o" 00:10:58.885 }' 00:10:58.885 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:58.885 { 00:10:58.885 "nqn": "nqn.2016-06.io.spdk:cnode9301", 00:10:58.885 "model_number": "c$\"KT.SpI=Fd^kRgP0^\\N_*p[t}'WGVp.(7)nyb3o", 00:10:58.885 "method": "nvmf_create_subsystem", 00:10:58.885 "req_id": 1 00:10:58.885 } 00:10:58.885 Got JSON-RPC error response 00:10:58.885 response: 00:10:58.885 { 00:10:58.885 "code": -32602, 00:10:58.885 "message": "Invalid MN c$\"KT.SpI=Fd^kRgP0^\\N_*p[t}'WGVp.(7)nyb3o" 00:10:58.885 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:58.885 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:59.142 [2024-07-24 20:38:54.598956] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.142 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:59.401 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:59.401 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:59.401 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:59.401 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:59.401 20:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' 
-s 4421 00:10:59.659 [2024-07-24 20:38:55.088552] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:59.659 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:59.659 { 00:10:59.659 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:59.659 "listen_address": { 00:10:59.659 "trtype": "tcp", 00:10:59.659 "traddr": "", 00:10:59.659 "trsvcid": "4421" 00:10:59.659 }, 00:10:59.659 "method": "nvmf_subsystem_remove_listener", 00:10:59.659 "req_id": 1 00:10:59.659 } 00:10:59.659 Got JSON-RPC error response 00:10:59.659 response: 00:10:59.659 { 00:10:59.659 "code": -32602, 00:10:59.659 "message": "Invalid parameters" 00:10:59.659 }' 00:10:59.659 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:59.659 { 00:10:59.659 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:59.659 "listen_address": { 00:10:59.659 "trtype": "tcp", 00:10:59.659 "traddr": "", 00:10:59.659 "trsvcid": "4421" 00:10:59.659 }, 00:10:59.659 "method": "nvmf_subsystem_remove_listener", 00:10:59.659 "req_id": 1 00:10:59.659 } 00:10:59.659 Got JSON-RPC error response 00:10:59.659 response: 00:10:59.659 { 00:10:59.659 "code": -32602, 00:10:59.659 "message": "Invalid parameters" 00:10:59.659 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:59.659 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3558 -i 0 00:10:59.915 [2024-07-24 20:38:55.337315] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3558: invalid cntlid range [0-65519] 00:10:59.916 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:59.916 { 00:10:59.916 "nqn": "nqn.2016-06.io.spdk:cnode3558", 00:10:59.916 "min_cntlid": 0, 00:10:59.916 "method": "nvmf_create_subsystem", 00:10:59.916 "req_id": 1 00:10:59.916 } 
00:10:59.916 Got JSON-RPC error response 00:10:59.916 response: 00:10:59.916 { 00:10:59.916 "code": -32602, 00:10:59.916 "message": "Invalid cntlid range [0-65519]" 00:10:59.916 }' 00:10:59.916 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:59.916 { 00:10:59.916 "nqn": "nqn.2016-06.io.spdk:cnode3558", 00:10:59.916 "min_cntlid": 0, 00:10:59.916 "method": "nvmf_create_subsystem", 00:10:59.916 "req_id": 1 00:10:59.916 } 00:10:59.916 Got JSON-RPC error response 00:10:59.916 response: 00:10:59.916 { 00:10:59.916 "code": -32602, 00:10:59.916 "message": "Invalid cntlid range [0-65519]" 00:10:59.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:59.916 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3290 -i 65520 00:11:00.173 [2024-07-24 20:38:55.590119] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3290: invalid cntlid range [65520-65519] 00:11:00.173 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:00.173 { 00:11:00.173 "nqn": "nqn.2016-06.io.spdk:cnode3290", 00:11:00.173 "min_cntlid": 65520, 00:11:00.173 "method": "nvmf_create_subsystem", 00:11:00.173 "req_id": 1 00:11:00.173 } 00:11:00.173 Got JSON-RPC error response 00:11:00.173 response: 00:11:00.173 { 00:11:00.173 "code": -32602, 00:11:00.173 "message": "Invalid cntlid range [65520-65519]" 00:11:00.173 }' 00:11:00.173 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:00.173 { 00:11:00.173 "nqn": "nqn.2016-06.io.spdk:cnode3290", 00:11:00.173 "min_cntlid": 65520, 00:11:00.173 "method": "nvmf_create_subsystem", 00:11:00.173 "req_id": 1 00:11:00.173 } 00:11:00.173 Got JSON-RPC error response 00:11:00.173 response: 00:11:00.173 { 00:11:00.173 "code": -32602, 00:11:00.173 "message": "Invalid 
cntlid range [65520-65519]" 00:11:00.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:00.173 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30734 -I 0 00:11:00.431 [2024-07-24 20:38:55.830945] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30734: invalid cntlid range [1-0] 00:11:00.431 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:00.431 { 00:11:00.431 "nqn": "nqn.2016-06.io.spdk:cnode30734", 00:11:00.431 "max_cntlid": 0, 00:11:00.431 "method": "nvmf_create_subsystem", 00:11:00.431 "req_id": 1 00:11:00.431 } 00:11:00.431 Got JSON-RPC error response 00:11:00.431 response: 00:11:00.431 { 00:11:00.431 "code": -32602, 00:11:00.431 "message": "Invalid cntlid range [1-0]" 00:11:00.431 }' 00:11:00.431 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:00.431 { 00:11:00.431 "nqn": "nqn.2016-06.io.spdk:cnode30734", 00:11:00.431 "max_cntlid": 0, 00:11:00.431 "method": "nvmf_create_subsystem", 00:11:00.431 "req_id": 1 00:11:00.431 } 00:11:00.431 Got JSON-RPC error response 00:11:00.431 response: 00:11:00.431 { 00:11:00.431 "code": -32602, 00:11:00.431 "message": "Invalid cntlid range [1-0]" 00:11:00.431 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:00.431 20:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5222 -I 65520 00:11:00.688 [2024-07-24 20:38:56.079728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5222: invalid cntlid range [1-65520] 00:11:00.688 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:00.688 { 00:11:00.688 "nqn": 
"nqn.2016-06.io.spdk:cnode5222", 00:11:00.688 "max_cntlid": 65520, 00:11:00.688 "method": "nvmf_create_subsystem", 00:11:00.688 "req_id": 1 00:11:00.688 } 00:11:00.688 Got JSON-RPC error response 00:11:00.688 response: 00:11:00.688 { 00:11:00.688 "code": -32602, 00:11:00.688 "message": "Invalid cntlid range [1-65520]" 00:11:00.688 }' 00:11:00.688 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:00.688 { 00:11:00.688 "nqn": "nqn.2016-06.io.spdk:cnode5222", 00:11:00.688 "max_cntlid": 65520, 00:11:00.688 "method": "nvmf_create_subsystem", 00:11:00.688 "req_id": 1 00:11:00.688 } 00:11:00.688 Got JSON-RPC error response 00:11:00.688 response: 00:11:00.688 { 00:11:00.688 "code": -32602, 00:11:00.688 "message": "Invalid cntlid range [1-65520]" 00:11:00.688 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:00.689 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9745 -i 6 -I 5 00:11:00.946 [2024-07-24 20:38:56.344648] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9745: invalid cntlid range [6-5] 00:11:00.946 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:00.946 { 00:11:00.946 "nqn": "nqn.2016-06.io.spdk:cnode9745", 00:11:00.946 "min_cntlid": 6, 00:11:00.946 "max_cntlid": 5, 00:11:00.946 "method": "nvmf_create_subsystem", 00:11:00.946 "req_id": 1 00:11:00.946 } 00:11:00.946 Got JSON-RPC error response 00:11:00.946 response: 00:11:00.946 { 00:11:00.946 "code": -32602, 00:11:00.946 "message": "Invalid cntlid range [6-5]" 00:11:00.946 }' 00:11:00.946 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:00.946 { 00:11:00.946 "nqn": "nqn.2016-06.io.spdk:cnode9745", 00:11:00.946 "min_cntlid": 6, 00:11:00.946 "max_cntlid": 5, 00:11:00.946 "method": 
"nvmf_create_subsystem", 00:11:00.946 "req_id": 1 00:11:00.946 } 00:11:00.946 Got JSON-RPC error response 00:11:00.946 response: 00:11:00.946 { 00:11:00.946 "code": -32602, 00:11:00.946 "message": "Invalid cntlid range [6-5]" 00:11:00.946 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:00.946 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:00.946 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:00.946 { 00:11:00.946 "name": "foobar", 00:11:00.946 "method": "nvmf_delete_target", 00:11:00.946 "req_id": 1 00:11:00.946 } 00:11:00.946 Got JSON-RPC error response 00:11:00.946 response: 00:11:00.946 { 00:11:00.946 "code": -32602, 00:11:00.946 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:00.946 }' 00:11:00.946 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:00.946 { 00:11:00.946 "name": "foobar", 00:11:00.947 "method": "nvmf_delete_target", 00:11:00.947 "req_id": 1 00:11:00.947 } 00:11:00.947 Got JSON-RPC error response 00:11:00.947 response: 00:11:00.947 { 00:11:00.947 "code": -32602, 00:11:00.947 "message": "The specified target doesn't exist, cannot delete it." 
00:11:00.947 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.947 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.947 rmmod nvme_tcp 00:11:00.947 rmmod nvme_fabrics 00:11:01.205 rmmod nvme_keyring 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1552772 ']' 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1552772 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1552772 ']' 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1552772 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552772 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552772' 00:11:01.205 killing process with pid 1552772 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1552772 00:11:01.205 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1552772 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.463 20:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.365 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.365 00:11:03.365 real 0m8.717s 00:11:03.365 user 0m19.748s 00:11:03.366 sys 0m2.501s 
00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:03.366 ************************************ 00:11:03.366 END TEST nvmf_invalid 00:11:03.366 ************************************ 00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.366 20:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:03.624 ************************************ 00:11:03.624 START TEST nvmf_connect_stress 00:11:03.624 ************************************ 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:03.624 * Looking for test storage... 
00:11:03.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.624 20:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.624 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.625 20:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:05.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:05.527 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.528 20:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:05.528 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.528 20:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:05.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:05.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.528 
20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.528 20:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.528 
20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:11:05.528 00:11:05.528 --- 10.0.0.2 ping statistics --- 00:11:05.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.528 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:11:05.528 00:11:05.528 --- 10.0.0.1 ping statistics --- 00:11:05.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.528 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.528 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1555474 00:11:05.786 20:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1555474 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1555474 ']' 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.786 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.786 [2024-07-24 20:39:01.153856] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:11:05.786 [2024-07-24 20:39:01.153957] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.786 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.786 [2024-07-24 20:39:01.218601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.786 [2024-07-24 20:39:01.328094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:05.786 [2024-07-24 20:39:01.328146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.786 [2024-07-24 20:39:01.328169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.786 [2024-07-24 20:39:01.328179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.786 [2024-07-24 20:39:01.328189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.786 [2024-07-24 20:39:01.328270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.786 [2024-07-24 20:39:01.328345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.786 [2024-07-24 20:39:01.328348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.044 [2024-07-24 20:39:01.466687] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.044 [2024-07-24 20:39:01.494329] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.044 NULL1 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1555521 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.044 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.045 20:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.045 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.608 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.608 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:06.608 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.608 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.608 20:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.865 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.866 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:06.866 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.866 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.866 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.122 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.122 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:07.122 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.122 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.122 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.380 
20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.380 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:07.380 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.380 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.380 20:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.637 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.637 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:07.637 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.637 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.637 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.202 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.202 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:08.202 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.202 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.202 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.501 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.501 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 
00:11:08.501 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.501 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.501 20:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.781 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.781 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:08.781 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.781 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.781 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.038 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.038 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:09.038 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.038 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.038 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.295 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.295 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:09.295 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.295 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:09.295 20:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.553 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.553 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:09.553 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.553 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.553 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.118 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.118 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:10.118 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.118 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.118 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.375 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.375 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:10.375 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.375 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.375 20:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.633 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:10.633 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:10.633 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.633 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.633 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.890 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.890 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:10.890 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.890 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.890 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.148 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.148 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:11.148 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.148 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.148 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.713 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.713 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:11.713 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.713 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.713 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.971 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:11.971 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.971 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.971 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.228 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.228 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:12.228 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.228 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.228 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.485 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.485 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:12.485 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.485 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.485 20:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:11:12.742 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.742 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:12.742 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.742 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.742 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.307 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.307 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:13.307 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.307 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.307 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.564 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.564 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:13.564 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.564 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.564 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.822 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.822 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:13.822 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.822 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.822 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.079 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.079 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:14.079 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.079 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.079 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.336 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.336 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:14.336 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.336 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.336 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.901 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.901 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:14.901 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.901 20:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.901 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:15.159 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.159 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.416 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:15.416 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.416 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.416 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.673 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.673 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:15.673 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.673 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.673 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.237 
20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.237 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:16.237 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.237 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.237 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.237 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1555521 00:11:16.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1555521) - No such process 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1555521 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.494 20:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.494 rmmod nvme_tcp 00:11:16.494 rmmod nvme_fabrics 00:11:16.494 rmmod nvme_keyring 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1555474 ']' 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1555474 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1555474 ']' 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1555474 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1555474 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1555474' 00:11:16.494 killing process with pid 1555474 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1555474 00:11:16.494 20:39:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1555474 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.751 20:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.281 00:11:19.281 real 0m15.297s 00:11:19.281 user 0m38.103s 00:11:19.281 sys 0m6.111s 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.281 ************************************ 00:11:19.281 END TEST nvmf_connect_stress 00:11:19.281 ************************************ 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
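The teardown above goes through `killprocess` from common/autotest_common.sh: probe the pid with `kill -0`, check via `ps` that it is not a `sudo` wrapper, then kill and reap it. A minimal sketch of that pattern, assuming a plain SIGTERM is enough (the function name and internals here are illustrative, not the exact SPDK helper):

```shell
# Sketch of the killprocess pattern seen in the log (illustrative, not the
# exact SPDK implementation): verify the pid is alive, refuse to kill a
# sudo wrapper, then terminate and reap it.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1
  kill -0 "$pid" 2>/dev/null || return 0            # already gone
  local name
  name=$(ps --no-headers -o comm= "$pid" || true)
  [ "$name" = "sudo" ] && return 1                  # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                   # reap if it is our child
}
```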
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.281 ************************************ 00:11:19.281 START TEST nvmf_fused_ordering 00:11:19.281 ************************************ 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:19.281 * Looking for test storage... 00:11:19.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.281 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.282 20:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:19.282 20:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.282 20:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:21.179 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.180 20:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.180 20:39:16 
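The discovery loop above maps each NIC's PCI address to its kernel interface name by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path, which is how the log arrives at `cvl_0_0` and `cvl_0_1`. A condensed sketch, with the helper name and the sysfs-root parameter being illustrative additions (the parameter exists only so the lookup can be exercised against a fake tree):

```shell
# Map a NIC's PCI address to its net interface name(s) via sysfs, as the
# gather_supported_nvmf_pci_devs loop above does. SYSFS defaults to the real
# tree; passing another root is purely for testing this sketch.
pci_net_names() {
  local sysfs=${1:-/sys/bus/pci/devices} pci=$2 dir
  for dir in "$sysfs/$pci/net/"*; do
    [ -e "$dir" ] || continue    # glob did not match: no net device for this PCI function
    echo "${dir##*/}"            # keep only the interface name, e.g. cvl_0_0
  done
}
```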
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.180 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:11:21.180 00:11:21.180 --- 10.0.0.2 ping statistics --- 00:11:21.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.181 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:21.181 00:11:21.181 --- 10.0.0.1 ping statistics --- 00:11:21.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.181 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1559199 00:11:21.181 20:39:16 
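The `nvmf_tcp_init` sequence above builds the test topology: one port of the NIC (`cvl_0_0`, the target side) is moved into a fresh network namespace, both sides get a 10.0.0.x/24 address, an iptables rule opens TCP port 4420 on the initiator interface, and two pings verify reachability in each direction. Condensed below, wrapped in an echoing `run()` so the sketch is safe to execute without root; replacing the `echo` with `"$@"` would apply it for real. Interface, namespace, and address names are taken directly from the log.

```shell
# The netns setup from the log, condensed. run() is a dry-run wrapper so this
# sketch does not require root; swap its body for "$@" to actually configure.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                            # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP on the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                         # host -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> host
```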
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1559199 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1559199 ']' 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.181 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.181 [2024-07-24 20:39:16.551004] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:11:21.181 [2024-07-24 20:39:16.551077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.181 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.181 [2024-07-24 20:39:16.614783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.181 [2024-07-24 20:39:16.721472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:21.181 [2024-07-24 20:39:16.721529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.181 [2024-07-24 20:39:16.721557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.181 [2024-07-24 20:39:16.721569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.181 [2024-07-24 20:39:16.721579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.181 [2024-07-24 20:39:16.721605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 [2024-07-24 20:39:16.862826] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 [2024-07-24 20:39:16.879057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 NULL1 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:21.439 20:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.439 20:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:21.439 [2024-07-24 20:39:16.924525] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
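The `rpc_cmd` calls above configure the freshly started `nvmf_tgt` before the fused_ordering client connects: create the TCP transport, create a subsystem that allows any host, add a TCP listener on the namespaced target IP, back it with a 1000 MiB null bdev, and attach that bdev as namespace 1. Written out below against `scripts/rpc.py` (in the log, `rpc_cmd` is a wrapper that reaches the target's RPC socket), with a dry-run `rpc()` stand-in so the sketch runs without a live target:

```shell
# The RPC configuration sequence from the log. rpc() is a dry-run stand-in
# for: scripts/rpc.py "$@" (run inside the target's network namespace).
rpc() { echo "+ rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192 B in-capsule data
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # -a: allow any host; -m: max namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512 B blocks ("size: 1GB" in the log)
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns "$NQN" NULL1                           # becomes Namespace ID 1
```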
00:11:21.439 [2024-07-24 20:39:16.924568] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559337 ] 00:11:21.439 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.005 Attached to nqn.2016-06.io.spdk:cnode1 00:11:22.005 Namespace ID: 1 size: 1GB 00:11:22.005 fused_ordering(0) ... fused_ordering(1023) [1024 sequential fused_ordering status lines elided; operations 0-1023 all reported in order, timestamps 00:11:22.005 through 00:11:24.341] 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.341 rmmod nvme_tcp 00:11:24.341 rmmod nvme_fabrics 00:11:24.341 rmmod nvme_keyring 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1559199 ']' 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1559199 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1559199 ']' 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering --
common/autotest_common.sh@954 -- # kill -0 1559199 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:24.341 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559199 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559199' 00:11:24.342 killing process with pid 1559199 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1559199 00:11:24.342 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1559199 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:11:24.601 20:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.566 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.566 00:11:26.566 real 0m7.715s 00:11:26.566 user 0m5.390s 00:11:26.566 sys 0m3.316s 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:26.566 ************************************ 00:11:26.566 END TEST nvmf_fused_ordering 00:11:26.566 ************************************ 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.566 ************************************ 00:11:26.566 START TEST nvmf_ns_masking 00:11:26.566 ************************************ 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:26.566 * Looking for test storage... 
00:11:26.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.566 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.824 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.824 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.825 
20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7c9b5647-7c25-4814-a1f3-a5b9728be1c1 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=98ed4552-c9b9-4de0-a6ba-e2dfd6e9854e 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=96578397-e36c-4d35-a07e-36e59288dc12 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.825 20:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.825 20:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.880 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.881 20:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.881 20:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:11:28.881 00:11:28.881 --- 10.0.0.2 ping statistics --- 00:11:28.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.881 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:11:28.881 00:11:28.881 --- 10.0.0.1 ping statistics --- 00:11:28.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.881 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.881 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1561545 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1561545 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1561545 ']' 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.882 20:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:28.882 [2024-07-24 20:39:24.297655] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:11:28.882 [2024-07-24 20:39:24.297735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.882 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.882 [2024-07-24 20:39:24.366666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.140 [2024-07-24 20:39:24.483891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.140 [2024-07-24 20:39:24.483949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.140 [2024-07-24 20:39:24.483965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.140 [2024-07-24 20:39:24.483978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.140 [2024-07-24 20:39:24.483990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:29.140 [2024-07-24 20:39:24.484027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.706 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.964 [2024-07-24 20:39:25.502116] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.964 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:29.964 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:29.964 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:30.222 Malloc1 00:11:30.481 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:30.740 Malloc2 00:11:30.740 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.999 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:31.258 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.517 [2024-07-24 20:39:26.881620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.517 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:31.517 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96578397-e36c-4d35-a07e-36e59288dc12 -a 10.0.0.2 -s 4420 -i 4 00:11:31.517 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.517 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.517 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.517 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:31.517 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.044 [ 0]:0x1 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0969bf17d4e946ecb6b3f5ede750a005 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0969bf17d4e946ecb6b3f5ede750a005 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.044 [ 0]:0x1 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0969bf17d4e946ecb6b3f5ede750a005 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0969bf17d4e946ecb6b3f5ede750a005 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:34.044 [ 1]:0x2 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:34.044 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.302 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.560 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:34.818 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:34.818 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96578397-e36c-4d35-a07e-36e59288dc12 -a 10.0.0.2 -s 4420 -i 4 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:35.075 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:36.974 20:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:36.974 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:36.974 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.974 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:36.974 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:36.975 [ 0]:0x2 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:36.975 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:37.233 [ 0]:0x1 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.233 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0969bf17d4e946ecb6b3f5ede750a005 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0969bf17d4e946ecb6b3f5ede750a005 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:11:37.491 [ 1]:0x2 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.491 20:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:37.750 [ 0]:0x2 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.750 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.007 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:38.007 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.007 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:38.007 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.007 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.265 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:38.265 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96578397-e36c-4d35-a07e-36e59288dc12 -a 10.0.0.2 -s 4420 -i 4 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:38.523 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.423 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:40.424 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:40.424 20:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:40.682 [ 0]:0x1 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0969bf17d4e946ecb6b3f5ede750a005 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0969bf17d4e946ecb6b3f5ede750a005 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:40.682 [ 1]:0x2 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.682 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.940 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.198 [ 0]:0x2 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.198 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:41.199 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:41.457 [2024-07-24 20:39:36.771363] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:41.457 request: 00:11:41.457 { 00:11:41.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.457 "nsid": 2, 00:11:41.457 "host": "nqn.2016-06.io.spdk:host1", 00:11:41.457 "method": "nvmf_ns_remove_host", 00:11:41.457 "req_id": 1 00:11:41.457 } 00:11:41.457 Got JSON-RPC error response 00:11:41.457 response: 00:11:41.457 { 00:11:41.457 "code": -32602, 00:11:41.457 "message": "Invalid parameters" 00:11:41.457 } 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.457 20:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.457 [ 0]:0x2 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fac6f9fb98b244faa306cff873d91652 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fac6f9fb98b244faa306cff873d91652 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1563163 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1563163 /var/tmp/host.sock 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1563163 ']' 00:11:41.457 
20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:41.457 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.458 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:41.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:41.458 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.458 20:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:41.458 [2024-07-24 20:39:36.978666] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:11:41.458 [2024-07-24 20:39:36.978741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563163 ] 00:11:41.458 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.716 [2024-07-24 20:39:37.041559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.716 [2024-07-24 20:39:37.161172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.974 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.974 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:41.974 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.232 20:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:42.492 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7c9b5647-7c25-4814-a1f3-a5b9728be1c1 00:11:42.492 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:42.492 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C9B56477C254814A1F3A5B9728BE1C1 -i 00:11:42.750 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 98ed4552-c9b9-4de0-a6ba-e2dfd6e9854e 00:11:42.750 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:42.750 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 98ED4552C9B94DE0A6BAE2DFD6E9854E -i 00:11:43.009 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.266 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:43.832 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:43.832 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.090 nvme0n1 00:11:44.090 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:44.090 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:44.673 nvme1n2 00:11:44.673 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:44.673 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:44.673 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:44.673 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:44.673 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:44.931 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:44.931 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:44.931 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:44.931 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:45.188 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7c9b5647-7c25-4814-a1f3-a5b9728be1c1 == \7\c\9\b\5\6\4\7\-\7\c\2\5\-\4\8\1\4\-\a\1\f\3\-\a\5\b\9\7\2\8\b\e\1\c\1 ]] 00:11:45.188 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:45.188 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:45.188 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 98ed4552-c9b9-4de0-a6ba-e2dfd6e9854e == \9\8\e\d\4\5\5\2\-\c\9\b\9\-\4\d\e\0\-\a\6\b\a\-\e\2\d\f\d\6\e\9\8\5\4\e ]] 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1563163 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1563163 ']' 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1563163 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1563163 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:45.446 
20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1563163' 00:11:45.446 killing process with pid 1563163 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1563163 00:11:45.446 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1563163 00:11:46.011 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.269 rmmod nvme_tcp 00:11:46.269 rmmod nvme_fabrics 00:11:46.269 rmmod nvme_keyring 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 1561545 ']' 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1561545 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1561545 ']' 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1561545 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1561545 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1561545' 00:11:46.269 killing process with pid 1561545 00:11:46.269 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1561545 00:11:46.270 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1561545 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.527 20:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.066 00:11:49.066 real 0m22.043s 00:11:49.066 user 0m29.013s 00:11:49.066 sys 0m4.177s 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 ************************************ 00:11:49.066 END TEST nvmf_ns_masking 00:11:49.066 ************************************ 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 ************************************ 00:11:49.066 START TEST nvmf_nvme_cli 00:11:49.066 ************************************ 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.066 * Looking for test storage... 
00:11:49.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.066 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.067 20:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.067 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.968 
20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.968 20:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.968 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.969 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.969 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.969 20:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.969 20:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:11:50.969 00:11:50.969 --- 10.0.0.2 ping statistics --- 00:11:50.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.969 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:50.969 00:11:50.969 --- 10.0.0.1 ping statistics --- 00:11:50.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.969 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1565777 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1565777 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1565777 ']' 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.969 20:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.969 [2024-07-24 20:39:46.444592] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:11:50.969 [2024-07-24 20:39:46.444681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.969 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.969 [2024-07-24 20:39:46.511175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.227 [2024-07-24 20:39:46.620055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.227 [2024-07-24 20:39:46.620118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.227 [2024-07-24 20:39:46.620132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.227 [2024-07-24 20:39:46.620142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.227 [2024-07-24 20:39:46.620151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.227 [2024-07-24 20:39:46.620270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.227 [2024-07-24 20:39:46.620336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.227 [2024-07-24 20:39:46.620340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.227 [2024-07-24 20:39:46.620308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 [2024-07-24 20:39:47.438929] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 Malloc0 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 Malloc1 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 [2024-07-24 20:39:47.524031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:52.160 00:11:52.160 Discovery Log Number of Records 2, Generation counter 2 00:11:52.160 =====Discovery Log Entry 0====== 00:11:52.160 trtype: tcp 00:11:52.160 adrfam: ipv4 00:11:52.160 subtype: current discovery subsystem 00:11:52.160 treq: not required 00:11:52.160 portid: 0 00:11:52.160 trsvcid: 4420 00:11:52.160 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.160 traddr: 10.0.0.2 00:11:52.160 eflags: explicit discovery connections, duplicate discovery information 00:11:52.160 sectype: none 00:11:52.160 =====Discovery Log Entry 1====== 00:11:52.160 trtype: tcp 00:11:52.160 adrfam: ipv4 00:11:52.160 subtype: nvme subsystem 00:11:52.160 treq: not required 00:11:52.160 portid: 0 00:11:52.160 trsvcid: 4420 00:11:52.160 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:52.160 traddr: 10.0.0.2 00:11:52.160 eflags: none 00:11:52.160 sectype: none 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:52.160 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:53.093 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.991 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:54.992 /dev/nvme0n1 ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:54.992 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.250 rmmod nvme_tcp 00:11:55.250 rmmod nvme_fabrics 00:11:55.250 rmmod 
nvme_keyring 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1565777 ']' 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1565777 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1565777 ']' 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1565777 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1565777 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1565777' 00:11:55.250 killing process with pid 1565777 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1565777 00:11:55.250 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1565777 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.509 20:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.036 00:11:58.036 real 0m8.967s 00:11:58.036 user 0m18.175s 00:11:58.036 sys 0m2.268s 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.036 ************************************ 00:11:58.036 END TEST nvmf_nvme_cli 00:11:58.036 ************************************ 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.036 
************************************ 00:11:58.036 START TEST nvmf_vfio_user 00:11:58.036 ************************************ 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:58.036 * Looking for test storage... 00:11:58.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.036 20:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:58.036 20:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1566712 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1566712' 00:11:58.036 Process pid: 1566712 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1566712 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1566712 ']' 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:58.036 [2024-07-24 20:39:53.278119] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:11:58.036 [2024-07-24 20:39:53.278197] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.036 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.036 [2024-07-24 20:39:53.335362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.036 [2024-07-24 20:39:53.446142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.036 [2024-07-24 20:39:53.446192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.036 [2024-07-24 20:39:53.446220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.036 [2024-07-24 20:39:53.446240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.036 [2024-07-24 20:39:53.446258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
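Condensed for reference: the vfio-user setup this stretch of the log records amounts to launching `nvmf_tgt` and driving it with `rpc.py` (create the VFIOUSER transport, then per device a socket directory, a malloc bdev, a subsystem, a namespace, and a listener). A sketch of that sequence, assuming a built SPDK tree at `SPDK_DIR` (a placeholder path); the `run` wrapper only echoes each command so the order is visible without a live target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup phase this log records. SPDK_DIR is an
# assumed placeholder; "run" echoes instead of executing.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
run() { echo "+ $*"; }

# Start the target on cores 0-3 with all tracepoint groups enabled,
# matching the nvmf_tgt invocation in the log.
run "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]'

# Create the VFIOUSER transport, then one device per iteration,
# exactly as nvmf_vfio_user.sh does for NUM_DEVICES=2.
run "$RPC" nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    run mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    run "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
    run "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    run "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    run "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
```

Every command above appears verbatim in the surrounding log; only the dry-run wrapper and the `SPDK_DIR` placeholder are added for illustration.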
00:11:58.036 [2024-07-24 20:39:53.446330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.036 [2024-07-24 20:39:53.446391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.036 [2024-07-24 20:39:53.446459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.036 [2024-07-24 20:39:53.446462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:11:58.036 20:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:59.406 20:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:59.663 Malloc1 00:11:59.663 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:59.920 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:00.177 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:00.433 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.433 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:00.433 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.690 Malloc2 00:12:00.690 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:00.947 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:01.205 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:01.467 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:01.467 [2024-07-24 20:39:56.912473] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:12:01.467 [2024-07-24 20:39:56.912520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567137 ] 00:12:01.467 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.467 [2024-07-24 20:39:56.948939] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:01.467 [2024-07-24 20:39:56.958163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.467 [2024-07-24 20:39:56.958192] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffb28c29000 00:12:01.467 [2024-07-24 20:39:56.959159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.960153] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.467 [2024-07-24 
20:39:56.961158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.962161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.963164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.964171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.965173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.966173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.467 [2024-07-24 20:39:56.967182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.467 [2024-07-24 20:39:56.967202] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffb28c1e000 00:12:01.467 [2024-07-24 20:39:56.968337] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.467 [2024-07-24 20:39:56.983908] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:01.467 [2024-07-24 20:39:56.983944] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:01.467 [2024-07-24 20:39:56.988331] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:12:01.467 [2024-07-24 20:39:56.988392] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:01.467 [2024-07-24 20:39:56.988492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:01.467 [2024-07-24 20:39:56.988519] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:01.468 [2024-07-24 20:39:56.988546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:01.468 [2024-07-24 20:39:56.989328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:01.468 [2024-07-24 20:39:56.989354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:01.468 [2024-07-24 20:39:56.989369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:01.468 [2024-07-24 20:39:56.990313] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:01.468 [2024-07-24 20:39:56.990332] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:01.468 [2024-07-24 20:39:56.990346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.991316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:01.468 [2024-07-24 20:39:56.991334] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.992325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:01.468 [2024-07-24 20:39:56.992349] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:01.468 [2024-07-24 20:39:56.992360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.992372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.992481] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:01.468 [2024-07-24 20:39:56.992489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.992498] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:01.468 [2024-07-24 20:39:56.993334] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:01.468 [2024-07-24 20:39:56.994332] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:01.468 [2024-07-24 20:39:56.995337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.468 
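The register traffic above is the standard NVMe controller-enable handshake: confirm CC.EN=0 and CSTS.RDY=0, program ASQ/ACQ/AQA (the writes to offsets 0x28, 0x30, and 0x24, with AQA 0xff00ff encoding 256-entry admin queues), set CC.EN=1, then poll CSTS.RDY until it reads 1. A minimal sketch of that state machine against a hypothetical mock register file (offsets are the spec-defined ones the log shows; register widths are collapsed, and the real CC write also carries queue-entry-size fields, 0x460001 in the log):

```shell
#!/usr/bin/env bash
# Sketch of the CC.EN / CSTS.RDY enable handshake recorded in the log.
# Offsets follow the NVMe spec: CC=0x14, CSTS=0x1c, AQA=0x24, ASQ=0x28,
# ACQ=0x30. The "controller" is a mock associative array whose CSTS.RDY
# simply follows CC.EN, like a well-behaved device.
declare -A regs=([0x14]=0 [0x1c]=0 [0x24]=0 [0x28]=0 [0x30]=0)

reg_write() {  # reg_write <offset> <value>
    regs[$1]=$2
    if [ "$1" = "0x14" ]; then      # mock: RDY tracks EN instantly
        regs[0x1c]=$(( $2 & 1 ))
    fi
}
reg_read() { echo "${regs[$1]}"; }

nvme_enable() {
    # Disable first and confirm CSTS.RDY = 0 ("disable and wait for
    # CSTS.RDY = 0" in the log), then program the admin queue registers
    # before setting CC.EN = 1.
    reg_write 0x14 0
    [ "$(reg_read 0x1c)" -eq 0 ] || return 1
    reg_write 0x28 $((0x2000003c0000))   # ASQ, as in the log
    reg_write 0x30 $((0x2000003be000))   # ACQ, as in the log
    reg_write 0x24 $((0xff00ff))         # AQA: 256-entry admin SQ/CQ
    reg_write 0x14 1                     # CC.EN = 1
    # Poll CSTS.RDY until the controller reports ready.
    until [ $(( $(reg_read 0x1c) & 1 )) -eq 1 ]; do sleep 0.01; done
}
```

The ASQ/ACQ/AQA values are the ones visible in the surrounding debug lines; everything else in the mock is illustrative only.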
[2024-07-24 20:39:56.996332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.468 [2024-07-24 20:39:56.996469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.468 [2024-07-24 20:39:56.997348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:01.468 [2024-07-24 20:39:56.997367] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.468 [2024-07-24 20:39:56.997377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997402] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:01.468 [2024-07-24 20:39:56.997416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.468 [2024-07-24 20:39:56.997451] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.468 [2024-07-24 20:39:56.997458] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.468 [2024-07-24 20:39:56.997477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.468 [2024-07-24 20:39:56.997561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:01.468 [2024-07-24 20:39:56.997577] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:01.468 [2024-07-24 20:39:56.997585] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:01.468 [2024-07-24 20:39:56.997607] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:01.468 [2024-07-24 20:39:56.997614] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:01.468 [2024-07-24 20:39:56.997622] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:01.468 [2024-07-24 20:39:56.997633] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:01.468 [2024-07-24 20:39:56.997641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:01.468 [2024-07-24 20:39:56.997690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:01.468 [2024-07-24 20:39:56.997712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.468 [2024-07-24 20:39:56.997726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.468 [2024-07-24 20:39:56.997738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.468 [2024-07-24 20:39:56.997750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.468 [2024-07-24 20:39:56.997758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.468 [2024-07-24 20:39:56.997786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:01.468 [2024-07-24 20:39:56.997797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:01.468 [2024-07-24 20:39:56.997807] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:01.468 [2024-07-24 20:39:56.997815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.997828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.997838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.997851] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.997862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.997926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.997942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.997954] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:01.469 [2024-07-24 20:39:56.997962] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:01.469 [2024-07-24 20:39:56.997968] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.469 [2024-07-24 20:39:56.997977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.997992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998008] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:01.469 [2024-07-24 20:39:56.998027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:01.469 [2024-07-24 
20:39:56.998053] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.469 [2024-07-24 20:39:56.998061] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.469 [2024-07-24 20:39:56.998067] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.469 [2024-07-24 20:39:56.998075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998146] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.469 [2024-07-24 20:39:56.998154] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.469 [2024-07-24 20:39:56.998160] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.469 [2024-07-24 20:39:56.998169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998195] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998284] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:01.469 [2024-07-24 20:39:56.998292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:01.469 [2024-07-24 20:39:56.998300] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:01.469 [2024-07-24 20:39:56.998332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:12:01.469 [2024-07-24 20:39:56.998371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:01.469 [2024-07-24 20:39:56.998467] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:01.469 [2024-07-24 20:39:56.998477] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:01.469 [2024-07-24 20:39:56.998484] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:01.469 [2024-07-24 20:39:56.998490] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:01.469 [2024-07-24 20:39:56.998496] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:01.469 [2024-07-24 20:39:56.998505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:01.469 [2024-07-24 20:39:56.998517] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:12:01.469 [2024-07-24 20:39:56.998526] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:01.469 [2024-07-24 20:39:56.998532] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.469 [2024-07-24 20:39:56.998541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:01.469 [2024-07-24 20:39:56.998567] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:01.470 [2024-07-24 20:39:56.998575] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.470 [2024-07-24 20:39:56.998580] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.470 [2024-07-24 20:39:56.998589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.470 [2024-07-24 20:39:56.998601] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:01.470 [2024-07-24 20:39:56.998609] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:01.470 [2024-07-24 20:39:56.998614] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:01.470 [2024-07-24 20:39:56.998623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:01.470 [2024-07-24 20:39:56.998633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:01.470 [2024-07-24 20:39:56.998653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:12:01.470 [2024-07-24 20:39:56.998672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:12:01.470 [2024-07-24 20:39:56.998687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:12:01.470 =====================================================
00:12:01.470 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:01.470 =====================================================
00:12:01.470 Controller Capabilities/Features
00:12:01.470 ================================
00:12:01.470 Vendor ID: 4e58
00:12:01.470 Subsystem Vendor ID: 4e58
00:12:01.470 Serial Number: SPDK1
00:12:01.470 Model Number: SPDK bdev Controller
00:12:01.470 Firmware Version: 24.09
00:12:01.470 Recommended Arb Burst: 6
00:12:01.470 IEEE OUI Identifier: 8d 6b 50
00:12:01.470 Multi-path I/O
00:12:01.470 May have multiple subsystem ports: Yes
00:12:01.470 May have multiple controllers: Yes
00:12:01.470 Associated with SR-IOV VF: No
00:12:01.470 Max Data Transfer Size: 131072
00:12:01.470 Max Number of Namespaces: 32
00:12:01.470 Max Number of I/O Queues: 127
00:12:01.470 NVMe Specification Version (VS): 1.3
00:12:01.470 NVMe Specification Version (Identify): 1.3
00:12:01.470 Maximum Queue Entries: 256
00:12:01.470 Contiguous Queues Required: Yes
00:12:01.470 Arbitration Mechanisms Supported
00:12:01.470 Weighted Round Robin: Not Supported
00:12:01.470 Vendor Specific: Not Supported
00:12:01.470 Reset Timeout: 15000 ms
00:12:01.470 Doorbell Stride: 4 bytes
00:12:01.470 NVM Subsystem Reset: Not Supported
00:12:01.470 Command Sets Supported
00:12:01.470 NVM Command Set: Supported
00:12:01.470 Boot Partition: Not Supported
00:12:01.470 Memory Page Size Minimum: 4096 bytes
00:12:01.470 Memory Page Size Maximum: 4096 bytes
00:12:01.470 Persistent Memory Region: Not Supported
00:12:01.470 Optional Asynchronous Events Supported
00:12:01.470 Namespace Attribute Notices: Supported
00:12:01.470 Firmware Activation Notices: Not Supported
00:12:01.470 ANA Change Notices: Not Supported
00:12:01.470 PLE Aggregate Log Change Notices: Not Supported
00:12:01.470 LBA Status Info Alert Notices: Not Supported
00:12:01.470 EGE Aggregate Log Change Notices: Not Supported
00:12:01.470 Normal NVM Subsystem Shutdown event: Not Supported
00:12:01.470 Zone Descriptor Change Notices: Not Supported
00:12:01.470 Discovery Log Change Notices: Not Supported
00:12:01.470 Controller Attributes
00:12:01.470 128-bit Host Identifier: Supported
00:12:01.470 Non-Operational Permissive Mode: Not Supported
00:12:01.470 NVM Sets: Not Supported
00:12:01.470 Read Recovery Levels: Not Supported
00:12:01.470 Endurance Groups: Not Supported
00:12:01.470 Predictable Latency Mode: Not Supported
00:12:01.470 Traffic Based Keep ALive: Not Supported
00:12:01.470 Namespace Granularity: Not Supported
00:12:01.470 SQ Associations: Not Supported
00:12:01.470 UUID List: Not Supported
00:12:01.470 Multi-Domain Subsystem: Not Supported
00:12:01.470 Fixed Capacity Management: Not Supported
00:12:01.470 Variable Capacity Management: Not Supported
00:12:01.470 Delete Endurance Group: Not Supported
00:12:01.470 Delete NVM Set: Not Supported
00:12:01.470 Extended LBA Formats Supported: Not Supported
00:12:01.470 Flexible Data Placement Supported: Not Supported
00:12:01.470
00:12:01.470 Controller Memory Buffer Support
00:12:01.470 ================================
00:12:01.470 Supported: No
00:12:01.470
00:12:01.470 Persistent Memory Region Support
00:12:01.470 ================================
00:12:01.470 Supported: No
00:12:01.470
00:12:01.470 Admin Command Set Attributes
00:12:01.470 ============================
00:12:01.470 Security Send/Receive: Not Supported
00:12:01.470 Format NVM: Not Supported
00:12:01.470 Firmware Activate/Download: Not Supported
00:12:01.470 Namespace Management: Not Supported
00:12:01.470 Device Self-Test: Not Supported
00:12:01.470 Directives: Not Supported
00:12:01.470 NVMe-MI: Not Supported
00:12:01.470 Virtualization Management: Not Supported
00:12:01.470 Doorbell Buffer Config: Not Supported
00:12:01.470 Get LBA Status Capability: Not Supported
00:12:01.470 Command & Feature Lockdown Capability: Not Supported
00:12:01.470 Abort Command Limit: 4
00:12:01.470 Async Event Request Limit: 4
00:12:01.470 Number of Firmware Slots: N/A
00:12:01.470 Firmware Slot 1 Read-Only: N/A
00:12:01.470 Firmware Activation Without Reset: N/A
00:12:01.470 Multiple Update Detection Support: N/A
00:12:01.470 Firmware Update Granularity: No Information Provided
00:12:01.470 Per-Namespace SMART Log: No
00:12:01.470 Asymmetric Namespace Access Log Page: Not Supported
00:12:01.470 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:12:01.470 Command Effects Log Page: Supported
00:12:01.470 Get Log Page Extended Data: Supported
00:12:01.470 Telemetry Log Pages: Not Supported
00:12:01.470 Persistent Event Log Pages: Not Supported
00:12:01.470 Supported Log Pages Log Page: May Support
00:12:01.470 Commands Supported & Effects Log Page: Not Supported
00:12:01.471 Feature Identifiers & Effects Log Page:May Support
00:12:01.471 NVMe-MI Commands & Effects Log Page: May Support
00:12:01.471 Data Area 4 for Telemetry Log: Not Supported
00:12:01.471 Error Log Page Entries Supported: 128
00:12:01.471 Keep Alive: Supported
00:12:01.471 Keep Alive Granularity: 10000 ms
00:12:01.471
00:12:01.471 NVM Command Set Attributes
00:12:01.471 ==========================
00:12:01.471 Submission Queue Entry Size
00:12:01.471 Max: 64
00:12:01.471 Min: 64
00:12:01.471 Completion Queue Entry Size
00:12:01.471 Max: 16
00:12:01.471 Min: 16
00:12:01.471 Number of Namespaces: 32
00:12:01.471 Compare Command: Supported
00:12:01.471 Write Uncorrectable Command: Not Supported
00:12:01.471 Dataset Management Command: Supported
00:12:01.471 Write Zeroes Command: Supported
00:12:01.471 Set Features Save Field: Not Supported 00:12:01.471 Reservations: Not Supported 00:12:01.471 Timestamp: Not Supported 00:12:01.471 Copy: Supported 00:12:01.471 Volatile Write Cache: Present 00:12:01.471 Atomic Write Unit (Normal): 1 00:12:01.471 Atomic Write Unit (PFail): 1 00:12:01.471 Atomic Compare & Write Unit: 1 00:12:01.471 Fused Compare & Write: Supported 00:12:01.471 Scatter-Gather List 00:12:01.471 SGL Command Set: Supported (Dword aligned) 00:12:01.471 SGL Keyed: Not Supported 00:12:01.471 SGL Bit Bucket Descriptor: Not Supported 00:12:01.471 SGL Metadata Pointer: Not Supported 00:12:01.471 Oversized SGL: Not Supported 00:12:01.471 SGL Metadata Address: Not Supported 00:12:01.471 SGL Offset: Not Supported 00:12:01.471 Transport SGL Data Block: Not Supported 00:12:01.471 Replay Protected Memory Block: Not Supported 00:12:01.471 00:12:01.471 Firmware Slot Information 00:12:01.471 ========================= 00:12:01.471 Active slot: 1 00:12:01.471 Slot 1 Firmware Revision: 24.09 00:12:01.471 00:12:01.471 00:12:01.471 Commands Supported and Effects 00:12:01.471 ============================== 00:12:01.471 Admin Commands 00:12:01.471 -------------- 00:12:01.471 Get Log Page (02h): Supported 00:12:01.471 Identify (06h): Supported 00:12:01.471 Abort (08h): Supported 00:12:01.471 Set Features (09h): Supported 00:12:01.471 Get Features (0Ah): Supported 00:12:01.471 Asynchronous Event Request (0Ch): Supported 00:12:01.471 Keep Alive (18h): Supported 00:12:01.471 I/O Commands 00:12:01.471 ------------ 00:12:01.471 Flush (00h): Supported LBA-Change 00:12:01.471 Write (01h): Supported LBA-Change 00:12:01.471 Read (02h): Supported 00:12:01.471 Compare (05h): Supported 00:12:01.471 Write Zeroes (08h): Supported LBA-Change 00:12:01.471 Dataset Management (09h): Supported LBA-Change 00:12:01.471 Copy (19h): Supported LBA-Change 00:12:01.471 00:12:01.471 Error Log 00:12:01.471 ========= 00:12:01.471 00:12:01.471 Arbitration 00:12:01.471 =========== 00:12:01.471 
Arbitration Burst: 1 00:12:01.471 00:12:01.471 Power Management 00:12:01.471 ================ 00:12:01.471 Number of Power States: 1 00:12:01.471 Current Power State: Power State #0 00:12:01.471 Power State #0: 00:12:01.471 Max Power: 0.00 W 00:12:01.471 Non-Operational State: Operational 00:12:01.471 Entry Latency: Not Reported 00:12:01.471 Exit Latency: Not Reported 00:12:01.471 Relative Read Throughput: 0 00:12:01.471 Relative Read Latency: 0 00:12:01.471 Relative Write Throughput: 0 00:12:01.471 Relative Write Latency: 0 00:12:01.471 Idle Power: Not Reported 00:12:01.471 Active Power: Not Reported 00:12:01.471 Non-Operational Permissive Mode: Not Supported 00:12:01.471 00:12:01.471 Health Information 00:12:01.471 ================== 00:12:01.471 Critical Warnings: 00:12:01.471 Available Spare Space: OK 00:12:01.471 Temperature: OK 00:12:01.471 Device Reliability: OK 00:12:01.471 Read Only: No 00:12:01.471 Volatile Memory Backup: OK 00:12:01.471 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:01.471 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:01.471 Available Spare: 0% 00:12:01.471 Available Sp[2024-07-24 20:39:56.998809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:01.471 [2024-07-24 20:39:56.998825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:01.471 [2024-07-24 20:39:56.998868] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:01.471 [2024-07-24 20:39:56.998884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.471 [2024-07-24 20:39:56.998895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.471 [2024-07-24 20:39:56.998904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.471 [2024-07-24 20:39:56.998914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.471 [2024-07-24 20:39:57.000256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.471 [2024-07-24 20:39:57.000280] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:01.471 [2024-07-24 20:39:57.000357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.471 [2024-07-24 20:39:57.000432] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:01.472 [2024-07-24 20:39:57.000446] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:01.472 [2024-07-24 20:39:57.001370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:01.472 [2024-07-24 20:39:57.001396] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:01.472 [2024-07-24 20:39:57.001453] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:01.472 [2024-07-24 20:39:57.005253] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.730 are Threshold: 0% 00:12:01.730 Life Percentage Used: 0% 00:12:01.730 Data Units Read: 0 00:12:01.730 Data Units Written: 0 00:12:01.730 Host Read Commands: 0 00:12:01.730 Host Write Commands: 
0 00:12:01.730 Controller Busy Time: 0 minutes 00:12:01.730 Power Cycles: 0 00:12:01.730 Power On Hours: 0 hours 00:12:01.730 Unsafe Shutdowns: 0 00:12:01.730 Unrecoverable Media Errors: 0 00:12:01.730 Lifetime Error Log Entries: 0 00:12:01.730 Warning Temperature Time: 0 minutes 00:12:01.730 Critical Temperature Time: 0 minutes 00:12:01.730 00:12:01.730 Number of Queues 00:12:01.730 ================ 00:12:01.730 Number of I/O Submission Queues: 127 00:12:01.730 Number of I/O Completion Queues: 127 00:12:01.730 00:12:01.730 Active Namespaces 00:12:01.730 ================= 00:12:01.730 Namespace ID:1 00:12:01.730 Error Recovery Timeout: Unlimited 00:12:01.730 Command Set Identifier: NVM (00h) 00:12:01.730 Deallocate: Supported 00:12:01.730 Deallocated/Unwritten Error: Not Supported 00:12:01.730 Deallocated Read Value: Unknown 00:12:01.730 Deallocate in Write Zeroes: Not Supported 00:12:01.730 Deallocated Guard Field: 0xFFFF 00:12:01.730 Flush: Supported 00:12:01.730 Reservation: Supported 00:12:01.730 Namespace Sharing Capabilities: Multiple Controllers 00:12:01.730 Size (in LBAs): 131072 (0GiB) 00:12:01.730 Capacity (in LBAs): 131072 (0GiB) 00:12:01.730 Utilization (in LBAs): 131072 (0GiB) 00:12:01.730 NGUID: 3DD674A8CD0A4C2FA2FBC0F6FFC8DA36 00:12:01.730 UUID: 3dd674a8-cd0a-4c2f-a2fb-c0f6ffc8da36 00:12:01.730 Thin Provisioning: Not Supported 00:12:01.730 Per-NS Atomic Units: Yes 00:12:01.730 Atomic Boundary Size (Normal): 0 00:12:01.730 Atomic Boundary Size (PFail): 0 00:12:01.730 Atomic Boundary Offset: 0 00:12:01.730 Maximum Single Source Range Length: 65535 00:12:01.730 Maximum Copy Length: 65535 00:12:01.730 Maximum Source Range Count: 1 00:12:01.730 NGUID/EUI64 Never Reused: No 00:12:01.730 Namespace Write Protected: No 00:12:01.730 Number of LBA Formats: 1 00:12:01.730 Current LBA Format: LBA Format #00 00:12:01.730 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:01.730 00:12:01.730 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:01.730 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.730 [2024-07-24 20:39:57.234087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.029 Initializing NVMe Controllers 00:12:07.029 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:07.029 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:07.029 Initialization complete. Launching workers. 00:12:07.029 ======================================================== 00:12:07.029 Latency(us) 00:12:07.029 Device Information : IOPS MiB/s Average min max 00:12:07.029 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33294.75 130.06 3844.25 1189.63 7614.79 00:12:07.029 ======================================================== 00:12:07.029 Total : 33294.75 130.06 3844.25 1189.63 7614.79 00:12:07.029 00:12:07.029 [2024-07-24 20:40:02.256443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.029 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:07.029 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.029 [2024-07-24 20:40:02.497662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.286 Initializing NVMe Controllers 00:12:12.286 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.286 Initialization complete. Launching workers. 00:12:12.286 ======================================================== 00:12:12.286 Latency(us) 00:12:12.286 Device Information : IOPS MiB/s Average min max 00:12:12.286 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.00 62.71 7979.08 6797.89 12009.42 00:12:12.286 ======================================================== 00:12:12.286 Total : 16054.00 62.71 7979.08 6797.89 12009.42 00:12:12.286 00:12:12.286 [2024-07-24 20:40:07.539777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.286 20:40:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:12.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.286 [2024-07-24 20:40:07.749791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.546 [2024-07-24 20:40:12.838685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.546 Initializing NVMe Controllers 00:12:17.546 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.546 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:17.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:17.546 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:17.546 Initialization complete. Launching workers. 00:12:17.546 Starting thread on core 2 00:12:17.546 Starting thread on core 3 00:12:17.546 Starting thread on core 1 00:12:17.547 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:17.547 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.805 [2024-07-24 20:40:13.152743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.084 [2024-07-24 20:40:16.316119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.084 Initializing NVMe Controllers 00:12:21.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.084 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:21.084 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:21.084 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:21.084 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:21.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:21.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:21.084 Initialization complete. Launching workers. 
00:12:21.084 Starting thread on core 1 with urgent priority queue 00:12:21.084 Starting thread on core 2 with urgent priority queue 00:12:21.084 Starting thread on core 3 with urgent priority queue 00:12:21.084 Starting thread on core 0 with urgent priority queue 00:12:21.084 SPDK bdev Controller (SPDK1 ) core 0: 1891.67 IO/s 52.86 secs/100000 ios 00:12:21.084 SPDK bdev Controller (SPDK1 ) core 1: 2069.33 IO/s 48.32 secs/100000 ios 00:12:21.084 SPDK bdev Controller (SPDK1 ) core 2: 1979.00 IO/s 50.53 secs/100000 ios 00:12:21.084 SPDK bdev Controller (SPDK1 ) core 3: 1990.67 IO/s 50.23 secs/100000 ios 00:12:21.084 ======================================================== 00:12:21.084 00:12:21.084 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:21.084 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.084 [2024-07-24 20:40:16.606783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.084 Initializing NVMe Controllers 00:12:21.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.084 Namespace ID: 1 size: 0GB 00:12:21.084 Initialization complete. 00:12:21.084 INFO: using host memory buffer for IO 00:12:21.084 Hello world! 
00:12:21.084 [2024-07-24 20:40:16.641369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.342 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:21.342 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.599 [2024-07-24 20:40:16.936792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.532 Initializing NVMe Controllers 00:12:22.532 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.532 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.532 Initialization complete. Launching workers. 00:12:22.532 submit (in ns) avg, min, max = 8471.3, 3520.0, 5015238.9 00:12:22.532 complete (in ns) avg, min, max = 24551.4, 2062.2, 6009994.4 00:12:22.532 00:12:22.532 Submit histogram 00:12:22.532 ================ 00:12:22.532 Range in us Cumulative Count 00:12:22.532 3.508 - 3.532: 0.1856% ( 25) 00:12:22.532 3.532 - 3.556: 1.3960% ( 163) 00:12:22.532 3.556 - 3.579: 3.6979% ( 310) 00:12:22.532 3.579 - 3.603: 9.4899% ( 780) 00:12:22.532 3.603 - 3.627: 17.5911% ( 1091) 00:12:22.532 3.627 - 3.650: 27.2889% ( 1306) 00:12:22.532 3.650 - 3.674: 34.6774% ( 995) 00:12:22.532 3.674 - 3.698: 41.5163% ( 921) 00:12:22.532 3.698 - 3.721: 47.5607% ( 814) 00:12:22.532 3.721 - 3.745: 52.7512% ( 699) 00:12:22.532 3.745 - 3.769: 57.0654% ( 581) 00:12:22.532 3.769 - 3.793: 60.7930% ( 502) 00:12:22.532 3.793 - 3.816: 64.2237% ( 462) 00:12:22.532 3.816 - 3.840: 68.1815% ( 533) 00:12:22.532 3.840 - 3.864: 72.5551% ( 589) 00:12:22.532 3.864 - 3.887: 76.7951% ( 571) 00:12:22.532 3.887 - 3.911: 80.7604% ( 534) 00:12:22.532 3.911 - 3.935: 83.8049% ( 410) 00:12:22.532 3.935 - 3.959: 86.1142% ( 311) 00:12:22.532 3.959 - 
3.982: 88.0374% ( 259) 00:12:22.532 3.982 - 4.006: 89.8418% ( 243) 00:12:22.532 4.006 - 4.030: 91.2304% ( 187) 00:12:22.532 4.030 - 4.053: 92.2032% ( 131) 00:12:22.532 4.053 - 4.077: 93.1388% ( 126) 00:12:22.532 4.077 - 4.101: 94.1115% ( 131) 00:12:22.532 4.101 - 4.124: 94.8021% ( 93) 00:12:22.532 4.124 - 4.148: 95.3442% ( 73) 00:12:22.532 4.148 - 4.172: 95.7971% ( 61) 00:12:22.532 4.172 - 4.196: 96.1164% ( 43) 00:12:22.532 4.196 - 4.219: 96.4209% ( 41) 00:12:22.532 4.219 - 4.243: 96.6362% ( 29) 00:12:22.532 4.243 - 4.267: 96.7996% ( 22) 00:12:22.532 4.267 - 4.290: 96.9407% ( 19) 00:12:22.532 4.290 - 4.314: 97.1040% ( 22) 00:12:22.532 4.314 - 4.338: 97.1857% ( 11) 00:12:22.532 4.338 - 4.361: 97.2525% ( 9) 00:12:22.532 4.361 - 4.385: 97.3416% ( 12) 00:12:22.532 4.385 - 4.409: 97.3788% ( 5) 00:12:22.532 4.409 - 4.433: 97.4233% ( 6) 00:12:22.532 4.433 - 4.456: 97.4530% ( 4) 00:12:22.532 4.456 - 4.480: 97.5124% ( 8) 00:12:22.532 4.480 - 4.504: 97.5273% ( 2) 00:12:22.532 4.504 - 4.527: 97.5421% ( 2) 00:12:22.532 4.527 - 4.551: 97.5570% ( 2) 00:12:22.532 4.551 - 4.575: 97.5644% ( 1) 00:12:22.532 4.575 - 4.599: 97.5718% ( 1) 00:12:22.532 4.599 - 4.622: 97.5793% ( 1) 00:12:22.532 4.670 - 4.693: 97.5941% ( 2) 00:12:22.532 4.693 - 4.717: 97.6164% ( 3) 00:12:22.532 4.717 - 4.741: 97.6387% ( 3) 00:12:22.532 4.741 - 4.764: 97.7055% ( 9) 00:12:22.532 4.764 - 4.788: 97.7501% ( 6) 00:12:22.532 4.788 - 4.812: 97.7723% ( 3) 00:12:22.532 4.812 - 4.836: 97.7872% ( 2) 00:12:22.532 4.836 - 4.859: 97.8392% ( 7) 00:12:22.532 4.859 - 4.883: 97.8911% ( 7) 00:12:22.532 4.883 - 4.907: 97.9357% ( 6) 00:12:22.532 4.907 - 4.930: 97.9728% ( 5) 00:12:22.532 4.930 - 4.954: 98.0100% ( 5) 00:12:22.532 4.954 - 4.978: 98.0694% ( 8) 00:12:22.532 4.978 - 5.001: 98.1139% ( 6) 00:12:22.532 5.001 - 5.025: 98.1362% ( 3) 00:12:22.532 5.025 - 5.049: 98.1956% ( 8) 00:12:22.532 5.049 - 5.073: 98.2327% ( 5) 00:12:22.532 5.073 - 5.096: 98.2773% ( 6) 00:12:22.532 5.096 - 5.120: 98.2921% ( 2) 00:12:22.532 5.120 - 
5.144: 98.3292% ( 5) 00:12:22.532 5.144 - 5.167: 98.3515% ( 3) 00:12:22.532 5.167 - 5.191: 98.3590% ( 1) 00:12:22.532 5.191 - 5.215: 98.3887% ( 4) 00:12:22.532 5.215 - 5.239: 98.4035% ( 2) 00:12:22.532 5.239 - 5.262: 98.4184% ( 2) 00:12:22.532 5.262 - 5.286: 98.4258% ( 1) 00:12:22.532 5.286 - 5.310: 98.4332% ( 1) 00:12:22.532 5.310 - 5.333: 98.4406% ( 1) 00:12:22.532 5.333 - 5.357: 98.4481% ( 1) 00:12:22.532 5.357 - 5.381: 98.4555% ( 1) 00:12:22.532 5.381 - 5.404: 98.4778% ( 3) 00:12:22.532 5.404 - 5.428: 98.4852% ( 1) 00:12:22.532 5.476 - 5.499: 98.4926% ( 1) 00:12:22.532 5.499 - 5.523: 98.5000% ( 1) 00:12:22.532 5.523 - 5.547: 98.5075% ( 1) 00:12:22.532 5.618 - 5.641: 98.5149% ( 1) 00:12:22.532 5.689 - 5.713: 98.5223% ( 1) 00:12:22.532 6.163 - 6.210: 98.5372% ( 2) 00:12:22.532 6.353 - 6.400: 98.5446% ( 1) 00:12:22.532 6.400 - 6.447: 98.5520% ( 1) 00:12:22.532 6.495 - 6.542: 98.5594% ( 1) 00:12:22.532 6.732 - 6.779: 98.5743% ( 2) 00:12:22.532 6.921 - 6.969: 98.5817% ( 1) 00:12:22.532 7.159 - 7.206: 98.5891% ( 1) 00:12:22.532 7.490 - 7.538: 98.5966% ( 1) 00:12:22.532 7.585 - 7.633: 98.6040% ( 1) 00:12:22.532 7.633 - 7.680: 98.6114% ( 1) 00:12:22.532 7.822 - 7.870: 98.6263% ( 2) 00:12:22.532 7.917 - 7.964: 98.6337% ( 1) 00:12:22.532 7.964 - 8.012: 98.6485% ( 2) 00:12:22.532 8.012 - 8.059: 98.6634% ( 2) 00:12:22.532 8.154 - 8.201: 98.6783% ( 2) 00:12:22.532 8.439 - 8.486: 98.6857% ( 1) 00:12:22.532 8.486 - 8.533: 98.6931% ( 1) 00:12:22.532 8.533 - 8.581: 98.7005% ( 1) 00:12:22.532 8.581 - 8.628: 98.7080% ( 1) 00:12:22.532 8.628 - 8.676: 98.7154% ( 1) 00:12:22.532 8.676 - 8.723: 98.7228% ( 1) 00:12:22.532 8.723 - 8.770: 98.7302% ( 1) 00:12:22.532 8.770 - 8.818: 98.7377% ( 1) 00:12:22.532 8.818 - 8.865: 98.7451% ( 1) 00:12:22.532 8.913 - 8.960: 98.7599% ( 2) 00:12:22.532 8.960 - 9.007: 98.7674% ( 1) 00:12:22.532 9.055 - 9.102: 98.7748% ( 1) 00:12:22.532 9.150 - 9.197: 98.7822% ( 1) 00:12:22.532 9.197 - 9.244: 98.7971% ( 2) 00:12:22.532 9.339 - 9.387: 98.8119% ( 2) 
00:12:22.532 9.529 - 9.576: 98.8193% ( 1) 00:12:22.532 10.003 - 10.050: 98.8342% ( 2) 00:12:22.532 10.145 - 10.193: 98.8416% ( 1) 00:12:22.532 10.240 - 10.287: 98.8490% ( 1) 00:12:22.532 10.430 - 10.477: 98.8565% ( 1) 00:12:22.532 10.667 - 10.714: 98.8639% ( 1) 00:12:22.532 10.714 - 10.761: 98.8713% ( 1) 00:12:22.532 10.904 - 10.951: 98.8787% ( 1) 00:12:22.532 11.283 - 11.330: 98.8862% ( 1) 00:12:22.532 12.895 - 12.990: 98.8936% ( 1) 00:12:22.532 13.274 - 13.369: 98.9010% ( 1) 00:12:22.532 13.938 - 14.033: 98.9084% ( 1) 00:12:22.532 14.127 - 14.222: 98.9233% ( 2) 00:12:22.532 14.696 - 14.791: 98.9307% ( 1) 00:12:22.532 17.067 - 17.161: 98.9381% ( 1) 00:12:22.532 17.256 - 17.351: 98.9456% ( 1) 00:12:22.532 17.351 - 17.446: 98.9678% ( 3) 00:12:22.532 17.446 - 17.541: 98.9753% ( 1) 00:12:22.532 17.541 - 17.636: 99.0198% ( 6) 00:12:22.532 17.636 - 17.730: 99.0421% ( 3) 00:12:22.532 17.730 - 17.825: 99.0867% ( 6) 00:12:22.532 17.825 - 17.920: 99.1461% ( 8) 00:12:22.532 17.920 - 18.015: 99.2055% ( 8) 00:12:22.532 18.015 - 18.110: 99.2574% ( 7) 00:12:22.532 18.110 - 18.204: 99.3317% ( 10) 00:12:22.532 18.204 - 18.299: 99.4134% ( 11) 00:12:22.532 18.299 - 18.394: 99.5173% ( 14) 00:12:22.532 18.394 - 18.489: 99.5767% ( 8) 00:12:22.532 18.489 - 18.584: 99.6436% ( 9) 00:12:22.532 18.584 - 18.679: 99.6807% ( 5) 00:12:22.532 18.679 - 18.773: 99.7178% ( 5) 00:12:22.532 18.773 - 18.868: 99.7698% ( 7) 00:12:22.532 18.868 - 18.963: 99.7995% ( 4) 00:12:22.532 18.963 - 19.058: 99.8069% ( 1) 00:12:22.532 19.058 - 19.153: 99.8441% ( 5) 00:12:22.532 19.437 - 19.532: 99.8515% ( 1) 00:12:22.532 19.911 - 20.006: 99.8589% ( 1) 00:12:22.532 20.290 - 20.385: 99.8663% ( 1) 00:12:22.532 20.480 - 20.575: 99.8738% ( 1) 00:12:22.532 21.144 - 21.239: 99.8812% ( 1) 00:12:22.532 23.419 - 23.514: 99.8886% ( 1) 00:12:22.532 3980.705 - 4004.978: 99.9703% ( 11) 00:12:22.532 4004.978 - 4029.250: 99.9926% ( 3) 00:12:22.532 5000.154 - 5024.427: 100.0000% ( 1) 00:12:22.532 00:12:22.532 Complete histogram 
00:12:22.532 ================== 00:12:22.532 Range in us Cumulative Count 00:12:22.532 2.062 - 2.074: 6.3489% ( 855) 00:12:22.532 2.074 - 2.086: 38.4867% ( 4328) 00:12:22.532 2.086 - 2.098: 44.3009% ( 783) 00:12:22.533 2.098 - 2.110: 49.1052% ( 647) 00:12:22.533 2.110 - 2.121: 56.6050% ( 1010) 00:12:22.533 2.121 - 2.133: 58.3500% ( 235) 00:12:22.533 2.133 - 2.145: 64.3573% ( 809) 00:12:22.533 2.145 - 2.157: 73.6987% ( 1258) 00:12:22.533 2.157 - 2.169: 74.8571% ( 156) 00:12:22.533 2.169 - 2.181: 77.7530% ( 390) 00:12:22.533 2.181 - 2.193: 80.4559% ( 364) 00:12:22.533 2.193 - 2.204: 81.1242% ( 90) 00:12:22.533 2.204 - 2.216: 83.2999% ( 293) 00:12:22.533 2.216 - 2.228: 89.0844% ( 779) 00:12:22.533 2.228 - 2.240: 90.8814% ( 242) 00:12:22.533 2.240 - 2.252: 92.2254% ( 181) 00:12:22.533 2.252 - 2.264: 93.5917% ( 184) 00:12:22.533 2.264 - 2.276: 93.8962% ( 41) 00:12:22.533 2.276 - 2.287: 94.2229% ( 44) 00:12:22.533 2.287 - 2.299: 94.7279% ( 68) 00:12:22.533 2.299 - 2.311: 95.3516% ( 84) 00:12:22.533 2.311 - 2.323: 95.5669% ( 29) 00:12:22.533 2.323 - 2.335: 95.6635% ( 13) 00:12:22.533 2.335 - 2.347: 95.7006% ( 5) 00:12:22.533 2.347 - 2.359: 95.7526% ( 7) 00:12:22.533 2.359 - 2.370: 95.9456% ( 26) 00:12:22.533 2.370 - 2.382: 96.2352% ( 39) 00:12:22.533 2.382 - 2.394: 96.5174% ( 38) 00:12:22.533 2.394 - 2.406: 96.7402% ( 30) 00:12:22.533 2.406 - 2.418: 96.9629% ( 30) 00:12:22.533 2.418 - 2.430: 97.1412% ( 24) 00:12:22.533 2.430 - 2.441: 97.3416% ( 27) 00:12:22.533 2.441 - 2.453: 97.4976% ( 21) 00:12:22.533 2.453 - 2.465: 97.6312% ( 18) 00:12:22.533 2.465 - 2.477: 97.7649% ( 18) 00:12:22.533 2.477 - 2.489: 97.8466% ( 11) 00:12:22.533 2.489 - 2.501: 98.0025% ( 21) 00:12:22.533 2.501 - 2.513: 98.0619% ( 8) 00:12:22.533 2.513 - 2.524: 98.1139% ( 7) 00:12:22.533 2.524 - 2.536: 98.1807% ( 9) 00:12:22.533 2.536 - 2.548: 98.2030% ( 3) 00:12:22.533 2.548 - 2.560: 98.2476% ( 6) 00:12:22.533 2.560 - 2.572: 98.2773% ( 4) 00:12:22.533 2.572 - 2.584: 98.2921% ( 2) 00:12:22.533 2.596 - 
2.607: 98.3070% ( 2) 00:12:22.533 2.607 - 2.619: 98.3218% ( 2) 00:12:22.533 2.619 - 2.631: 98.3367% ( 2) 00:12:22.533 2.631 - 2.643: 98.3515% ( 2) 00:12:22.533 2.643 - 2.655: 98.3590% ( 1) 00:12:22.533 2.667 - 2.679: 98.3664% ( 1) 00:12:22.533 2.702 - 2.714: 98.3738% ( 1) 00:12:22.533 2.714 - 2.726: 98.3812% ( 1) 00:12:22.533 2.726 - 2.738: 98.3887% ( 1) 00:12:22.533 2.750 - 2.761: 98.3961% ( 1) 00:12:22.533 2.761 - 2.773: 98.4035% ( 1) 00:12:22.533 2.821 - 2.833: 98.4258% ( 3) 00:12:22.533 2.927 - 2.939: 98.4332% ( 1) 00:12:22.533 2.939 - 2.951: 98.4406% ( 1) 00:12:22.533 2.975 - 2.987: 98.4481% ( 1) 00:12:22.533 3.129 - 3.153: 98.4555% ( 1) 00:12:22.533 3.342 - 3.366: 98.4703% ( 2) 00:12:22.533 3.390 - 3.413: 98.4926% ( 3) 00:12:22.533 3.413 - 3.437: 98.5075% ( 2) 00:12:22.533 3.461 - 3.484: 98.5297% ( 3) 00:12:22.533 3.484 - 3.508: 98.5372% ( 1) 00:12:22.533 3.532 - 3.556: 98.5520% ( 2) 00:12:22.533 3.556 - 3.579: 98.5594% ( 1) 00:12:22.533 3.579 - 3.603: 98.5817% ( 3) 00:12:22.533 3.627 - 3.650: 98.5966% ( 2) 00:12:22.533 3.698 - 3.721: 98.6114% ( 2) 00:12:22.533 3.721 - 3.745: 98.6337% ( 3) 00:12:22.533 3.745 - 3.769: 98.6485% ( 2) 00:12:22.533 3.769 - 3.793: 98.6708% ( 3) 00:12:22.533 3.840 - 3.864: 98.6931% ( 3) 00:12:22.533 3.864 - 3.887: 98.7005% ( 1) 00:12:22.533 3.911 - 3.935: 98.7080% ( 1) 00:12:22.533 3.959 - 3.982: 98.7154% ( 1) 00:12:22.533 4.196 - 4.219: 98.7228% ( 1) 00:12:22.533 4.361 - 4.385: 98.7302% ( 1) 00:12:22.533 5.523 - 5.547: 98.7377% ( 1) 00:12:22.533 5.547 - 5.570: 98.7451% ( 1) 00:12:22.533 6.044 - 6.068: 98.7525% ( 1) 00:12:22.533 6.163 - 6.210: 98.7599% ( 1) 00:12:22.533 6.305 - 6.353: 98.7674% ( 1) 00:12:22.533 6.447 - 6.495: 98.7822% ( 2) 00:12:22.533 6.684 - 6.732: 98.7896% ( 1) 00:12:22.533 6.969 - 7.016: 98.8045% ( 2) 00:12:22.533 7.111 - 7.159: 98.8119% ( 1) 00:12:22.533 7.159 - 7.206: 98.8268% ( 2) 00:12:22.533 7.206 - 7.253: 98.8342% ( 1) 00:12:22.533 7.301 - 7.348: 98.8416% ( 1) 00:12:22.533 7.538 - 7.585: 98.8565% ( 2) 
00:12:22.533 7.585 - 7.633: 98.8639% ( 1) 00:12:22.533 7.680 - 7.727: 9[2024-07-24 20:40:17.958004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.533 8.8713% ( 1) 00:12:22.533 7.727 - 7.775: 98.8787% ( 1) 00:12:22.533 7.822 - 7.870: 98.8862% ( 1) 00:12:22.533 8.059 - 8.107: 98.8936% ( 1) 00:12:22.533 8.107 - 8.154: 98.9010% ( 1) 00:12:22.533 8.439 - 8.486: 98.9084% ( 1) 00:12:22.533 8.533 - 8.581: 98.9233% ( 2) 00:12:22.533 10.003 - 10.050: 98.9307% ( 1) 00:12:22.533 15.360 - 15.455: 98.9381% ( 1) 00:12:22.533 15.455 - 15.550: 98.9456% ( 1) 00:12:22.533 15.644 - 15.739: 98.9604% ( 2) 00:12:22.533 15.739 - 15.834: 98.9753% ( 2) 00:12:22.533 15.834 - 15.929: 98.9975% ( 3) 00:12:22.533 15.929 - 16.024: 99.0347% ( 5) 00:12:22.533 16.024 - 16.119: 99.0495% ( 2) 00:12:22.533 16.119 - 16.213: 99.0644% ( 2) 00:12:22.533 16.213 - 16.308: 99.1238% ( 8) 00:12:22.533 16.308 - 16.403: 99.1461% ( 3) 00:12:22.533 16.403 - 16.498: 99.1683% ( 3) 00:12:22.533 16.498 - 16.593: 99.2129% ( 6) 00:12:22.533 16.593 - 16.687: 99.2723% ( 8) 00:12:22.533 16.687 - 16.782: 99.2871% ( 2) 00:12:22.533 16.782 - 16.877: 99.3168% ( 4) 00:12:22.533 16.877 - 16.972: 99.3540% ( 5) 00:12:22.533 16.972 - 17.067: 99.3688% ( 2) 00:12:22.533 17.067 - 17.161: 99.4060% ( 5) 00:12:22.533 17.161 - 17.256: 99.4282% ( 3) 00:12:22.533 18.394 - 18.489: 99.4357% ( 1) 00:12:22.533 21.902 - 21.997: 99.4431% ( 1) 00:12:22.533 2585.031 - 2597.167: 99.4505% ( 1) 00:12:22.533 3980.705 - 4004.978: 99.8738% ( 57) 00:12:22.533 4004.978 - 4029.250: 99.9926% ( 16) 00:12:22.533 5995.330 - 6019.603: 100.0000% ( 1) 00:12:22.533 00:12:22.533 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:22.533 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 
00:12:22.533 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:22.533 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:22.533 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:22.791 [ 00:12:22.791 { 00:12:22.791 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:22.791 "subtype": "Discovery", 00:12:22.791 "listen_addresses": [], 00:12:22.791 "allow_any_host": true, 00:12:22.791 "hosts": [] 00:12:22.791 }, 00:12:22.791 { 00:12:22.791 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:22.791 "subtype": "NVMe", 00:12:22.791 "listen_addresses": [ 00:12:22.791 { 00:12:22.791 "trtype": "VFIOUSER", 00:12:22.791 "adrfam": "IPv4", 00:12:22.791 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:22.791 "trsvcid": "0" 00:12:22.791 } 00:12:22.791 ], 00:12:22.791 "allow_any_host": true, 00:12:22.791 "hosts": [], 00:12:22.791 "serial_number": "SPDK1", 00:12:22.791 "model_number": "SPDK bdev Controller", 00:12:22.791 "max_namespaces": 32, 00:12:22.791 "min_cntlid": 1, 00:12:22.791 "max_cntlid": 65519, 00:12:22.791 "namespaces": [ 00:12:22.791 { 00:12:22.791 "nsid": 1, 00:12:22.791 "bdev_name": "Malloc1", 00:12:22.791 "name": "Malloc1", 00:12:22.791 "nguid": "3DD674A8CD0A4C2FA2FBC0F6FFC8DA36", 00:12:22.791 "uuid": "3dd674a8-cd0a-4c2f-a2fb-c0f6ffc8da36" 00:12:22.791 } 00:12:22.791 ] 00:12:22.791 }, 00:12:22.791 { 00:12:22.791 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:22.791 "subtype": "NVMe", 00:12:22.791 "listen_addresses": [ 00:12:22.791 { 00:12:22.791 "trtype": "VFIOUSER", 00:12:22.791 "adrfam": "IPv4", 00:12:22.791 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:22.791 "trsvcid": "0" 00:12:22.791 } 00:12:22.791 ], 00:12:22.791 "allow_any_host": true, 00:12:22.791 "hosts": [], 00:12:22.791 
"serial_number": "SPDK2", 00:12:22.791 "model_number": "SPDK bdev Controller", 00:12:22.791 "max_namespaces": 32, 00:12:22.791 "min_cntlid": 1, 00:12:22.791 "max_cntlid": 65519, 00:12:22.791 "namespaces": [ 00:12:22.791 { 00:12:22.791 "nsid": 1, 00:12:22.791 "bdev_name": "Malloc2", 00:12:22.791 "name": "Malloc2", 00:12:22.791 "nguid": "6E4BB324A6C54344AF1F5623B3B23735", 00:12:22.791 "uuid": "6e4bb324-a6c5-4344-af1f-5623b3b23735" 00:12:22.791 } 00:12:22.791 ] 00:12:22.791 } 00:12:22.791 ] 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1569653 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:22.791 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:22.791 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.049 [2024-07-24 20:40:18.464781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.049 Malloc3 00:12:23.049 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:23.306 [2024-07-24 20:40:18.813321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.306 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.306 Asynchronous Event Request test 00:12:23.306 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.306 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.306 Registering asynchronous event callbacks... 00:12:23.306 Starting namespace attribute notice tests for all controllers... 00:12:23.306 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:23.306 aer_cb - Changed Namespace 00:12:23.306 Cleaning up... 
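[Editor's note] The AER test above synchronizes through a touch file: the `aer` binary is launched in the background with `-t /tmp/aer_touch_file`, and the script blocks in `waitforfile` until the binary touches the file to signal that its asynchronous-event callbacks are registered, only then adding Malloc3 as a new namespace. A minimal Python sketch of that wait loop follows; the function name mirrors the shell helper, but the timeout and poll interval are illustrative assumptions, not values from autotest_common.sh.

```python
import os
import tempfile
import threading
import time

def waitforfile(path, timeout_s=30.0, poll_interval_s=0.1):
    """Poll until `path` exists, mirroring the shell waitforfile pattern:
    the watched process touches the file once it is ready, and the caller
    blocks here before triggering the event under test.
    Returns True if the file appeared before the (assumed) timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval_s)
    return False

# Usage sketch: a background thread stands in for the aer binary
# touching the file once its AER callbacks are installed.
touch_file = os.path.join(tempfile.mkdtemp(), "aer_touch_file")
threading.Timer(0.2, lambda: open(touch_file, "w").close()).start()
ready = waitforfile(touch_file, timeout_s=5.0)
```

The polling design matters here because the readiness signal crosses a process boundary; a filesystem touch file needs no IPC setup and survives either process restarting.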
00:12:23.564 [ 00:12:23.564 { 00:12:23.564 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.564 "subtype": "Discovery", 00:12:23.564 "listen_addresses": [], 00:12:23.564 "allow_any_host": true, 00:12:23.564 "hosts": [] 00:12:23.564 }, 00:12:23.564 { 00:12:23.564 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.564 "subtype": "NVMe", 00:12:23.564 "listen_addresses": [ 00:12:23.564 { 00:12:23.564 "trtype": "VFIOUSER", 00:12:23.564 "adrfam": "IPv4", 00:12:23.564 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.564 "trsvcid": "0" 00:12:23.564 } 00:12:23.564 ], 00:12:23.564 "allow_any_host": true, 00:12:23.564 "hosts": [], 00:12:23.564 "serial_number": "SPDK1", 00:12:23.564 "model_number": "SPDK bdev Controller", 00:12:23.564 "max_namespaces": 32, 00:12:23.564 "min_cntlid": 1, 00:12:23.564 "max_cntlid": 65519, 00:12:23.564 "namespaces": [ 00:12:23.564 { 00:12:23.564 "nsid": 1, 00:12:23.564 "bdev_name": "Malloc1", 00:12:23.564 "name": "Malloc1", 00:12:23.564 "nguid": "3DD674A8CD0A4C2FA2FBC0F6FFC8DA36", 00:12:23.564 "uuid": "3dd674a8-cd0a-4c2f-a2fb-c0f6ffc8da36" 00:12:23.564 }, 00:12:23.564 { 00:12:23.564 "nsid": 2, 00:12:23.564 "bdev_name": "Malloc3", 00:12:23.564 "name": "Malloc3", 00:12:23.564 "nguid": "F00B106F78784DB6A4F60F5CEA7E12C1", 00:12:23.564 "uuid": "f00b106f-7878-4db6-a4f6-0f5cea7e12c1" 00:12:23.564 } 00:12:23.564 ] 00:12:23.564 }, 00:12:23.564 { 00:12:23.564 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:23.564 "subtype": "NVMe", 00:12:23.564 "listen_addresses": [ 00:12:23.564 { 00:12:23.564 "trtype": "VFIOUSER", 00:12:23.564 "adrfam": "IPv4", 00:12:23.564 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.564 "trsvcid": "0" 00:12:23.564 } 00:12:23.564 ], 00:12:23.564 "allow_any_host": true, 00:12:23.564 "hosts": [], 00:12:23.564 "serial_number": "SPDK2", 00:12:23.564 "model_number": "SPDK bdev Controller", 00:12:23.564 "max_namespaces": 32, 00:12:23.564 "min_cntlid": 1, 00:12:23.564 "max_cntlid": 65519, 00:12:23.564 "namespaces": [ 
00:12:23.564 { 00:12:23.564 "nsid": 1, 00:12:23.564 "bdev_name": "Malloc2", 00:12:23.564 "name": "Malloc2", 00:12:23.564 "nguid": "6E4BB324A6C54344AF1F5623B3B23735", 00:12:23.564 "uuid": "6e4bb324-a6c5-4344-af1f-5623b3b23735" 00:12:23.564 } 00:12:23.564 ] 00:12:23.564 } 00:12:23.564 ] 00:12:23.564 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1569653 00:12:23.564 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.564 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:23.564 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:23.565 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:23.565 [2024-07-24 20:40:19.098600] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
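[Editor's note] The second `nvmf_get_subsystems` dump above is the check that the AER test cares about: after `nvmf_subsystem_add_ns`, cnode1 now lists Malloc3 under nsid 2 alongside Malloc1 at nsid 1. A small Python sketch of that verification against a trimmed copy of the JSON shown in the log (only the fields used here are kept; the helper name is illustrative):

```python
import json

# Trimmed from the nvmf_get_subsystems output logged above.
subsystems = json.loads("""
[
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc1"},
     {"nsid": 2, "bdev_name": "Malloc3"}
   ]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc2"}
   ]}
]
""")

def find_nsid(subsystems, nqn, bdev_name):
    """Return the nsid under which `bdev_name` is exposed by subsystem
    `nqn`, or None if the bdev is not attached as a namespace."""
    for ss in subsystems:
        if ss.get("nqn") == nqn:
            for ns in ss.get("namespaces", []):
                if ns.get("bdev_name") == bdev_name:
                    return ns["nsid"]
    return None
```

In the run above this check would confirm the `-n 2` argument passed to `nvmf_subsystem_add_ns` took effect, which is what makes the namespace-attribute-change AER fire.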
00:12:23.565 [2024-07-24 20:40:19.098651] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569788 ] 00:12:23.565 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.823 [2024-07-24 20:40:19.132471] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:23.824 [2024-07-24 20:40:19.138547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.824 [2024-07-24 20:40:19.138577] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4992e97000 00:12:23.824 [2024-07-24 20:40:19.139536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.140545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.141548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.142569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.143573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.144574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.145588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.146606] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.824 [2024-07-24 20:40:19.147632] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.824 [2024-07-24 20:40:19.147653] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4992e8c000 00:12:23.824 [2024-07-24 20:40:19.148777] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:23.824 [2024-07-24 20:40:19.162897] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:23.824 [2024-07-24 20:40:19.162930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:23.824 [2024-07-24 20:40:19.168052] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.824 [2024-07-24 20:40:19.168103] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:23.824 [2024-07-24 20:40:19.168190] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:23.824 [2024-07-24 20:40:19.168212] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:23.824 [2024-07-24 20:40:19.168247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:23.824 [2024-07-24 20:40:19.169062] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:23.824 [2024-07-24 20:40:19.169088] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:23.824 [2024-07-24 20:40:19.169102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:23.824 [2024-07-24 20:40:19.170066] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.824 [2024-07-24 20:40:19.170086] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:23.824 [2024-07-24 20:40:19.170100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.171078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:23.824 [2024-07-24 20:40:19.171099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.172087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:23.824 [2024-07-24 20:40:19.172107] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:23.824 [2024-07-24 20:40:19.172116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.172128] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.172237] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:23.824 [2024-07-24 20:40:19.172267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.172276] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:23.824 [2024-07-24 20:40:19.173096] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:23.824 [2024-07-24 20:40:19.174099] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:23.824 [2024-07-24 20:40:19.175111] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:23.824 [2024-07-24 20:40:19.176107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.824 [2024-07-24 20:40:19.176188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:23.824 [2024-07-24 20:40:19.177125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:23.824 [2024-07-24 20:40:19.177145] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:23.824 [2024-07-24 20:40:19.177155] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.177182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:23.824 [2024-07-24 20:40:19.177196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.177215] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.824 [2024-07-24 20:40:19.177240] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.824 [2024-07-24 20:40:19.177254] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.824 [2024-07-24 20:40:19.177272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.824 [2024-07-24 20:40:19.185259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:23.824 [2024-07-24 20:40:19.185281] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:23.824 [2024-07-24 20:40:19.185291] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:23.824 [2024-07-24 20:40:19.185299] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:23.824 [2024-07-24 20:40:19.185307] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:23.824 [2024-07-24 20:40:19.185315] 
nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:23.824 [2024-07-24 20:40:19.185323] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:23.824 [2024-07-24 20:40:19.185331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.185344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.185366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:23.824 [2024-07-24 20:40:19.193256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:23.824 [2024-07-24 20:40:19.193285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.824 [2024-07-24 20:40:19.193300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.824 [2024-07-24 20:40:19.193313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.824 [2024-07-24 20:40:19.193326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.824 [2024-07-24 20:40:19.193335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.193351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.193367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:23.824 [2024-07-24 20:40:19.201255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:23.824 [2024-07-24 20:40:19.201273] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:23.824 [2024-07-24 20:40:19.201287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.201302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.201313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:23.824 [2024-07-24 20:40:19.201327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:23.824 [2024-07-24 20:40:19.209254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:23.824 [2024-07-24 20:40:19.209330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.209347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.209361] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:23.825 [2024-07-24 20:40:19.209370] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:23.825 [2024-07-24 20:40:19.209377] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.209387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.217255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.217280] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:23.825 [2024-07-24 20:40:19.217300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.217314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.217328] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.825 [2024-07-24 20:40:19.217336] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.825 [2024-07-24 20:40:19.217343] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.217353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.225255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.225284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.225300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.225314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.825 [2024-07-24 20:40:19.225323] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.825 [2024-07-24 20:40:19.225329] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.225339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.233253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.233274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:23.825 
[2024-07-24 20:40:19.233326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233344] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:23.825 [2024-07-24 20:40:19.233351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:23.825 [2024-07-24 20:40:19.233360] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:23.825 [2024-07-24 20:40:19.233384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.241269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.241295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.249255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.249281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.257253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.257280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.265254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.265285] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:23.825 [2024-07-24 20:40:19.265297] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:23.825 [2024-07-24 20:40:19.265304] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:23.825 [2024-07-24 20:40:19.265310] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:23.825 [2024-07-24 20:40:19.265316] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:23.825 [2024-07-24 20:40:19.265326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:23.825 [2024-07-24 20:40:19.265339] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:23.825 [2024-07-24 20:40:19.265347] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:23.825 [2024-07-24 20:40:19.265353] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.265366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.265379] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:23.825 [2024-07-24 20:40:19.265387] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:12:23.825 [2024-07-24 20:40:19.265393] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.265402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.265415] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:23.825 [2024-07-24 20:40:19.265423] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:23.825 [2024-07-24 20:40:19.265429] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:23.825 [2024-07-24 20:40:19.265438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:23.825 [2024-07-24 20:40:19.273256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.273302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:23.825 [2024-07-24 20:40:19.273314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:23.825 ===================================================== 00:12:23.825 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.825 ===================================================== 00:12:23.825 Controller Capabilities/Features 00:12:23.825 ================================ 00:12:23.825 Vendor ID: 4e58 00:12:23.825 
Subsystem Vendor ID: 4e58 00:12:23.825 Serial Number: SPDK2 00:12:23.825 Model Number: SPDK bdev Controller 00:12:23.825 Firmware Version: 24.09 00:12:23.825 Recommended Arb Burst: 6 00:12:23.825 IEEE OUI Identifier: 8d 6b 50 00:12:23.825 Multi-path I/O 00:12:23.825 May have multiple subsystem ports: Yes 00:12:23.825 May have multiple controllers: Yes 00:12:23.825 Associated with SR-IOV VF: No 00:12:23.825 Max Data Transfer Size: 131072 00:12:23.825 Max Number of Namespaces: 32 00:12:23.825 Max Number of I/O Queues: 127 00:12:23.825 NVMe Specification Version (VS): 1.3 00:12:23.825 NVMe Specification Version (Identify): 1.3 00:12:23.825 Maximum Queue Entries: 256 00:12:23.825 Contiguous Queues Required: Yes 00:12:23.825 Arbitration Mechanisms Supported 00:12:23.825 Weighted Round Robin: Not Supported 00:12:23.825 Vendor Specific: Not Supported 00:12:23.825 Reset Timeout: 15000 ms 00:12:23.825 Doorbell Stride: 4 bytes 00:12:23.825 NVM Subsystem Reset: Not Supported 00:12:23.825 Command Sets Supported 00:12:23.825 NVM Command Set: Supported 00:12:23.825 Boot Partition: Not Supported 00:12:23.825 Memory Page Size Minimum: 4096 bytes 00:12:23.825 Memory Page Size Maximum: 4096 bytes 00:12:23.825 Persistent Memory Region: Not Supported 00:12:23.825 Optional Asynchronous Events Supported 00:12:23.825 Namespace Attribute Notices: Supported 00:12:23.825 Firmware Activation Notices: Not Supported 00:12:23.825 ANA Change Notices: Not Supported 00:12:23.825 PLE Aggregate Log Change Notices: Not Supported 00:12:23.825 LBA Status Info Alert Notices: Not Supported 00:12:23.825 EGE Aggregate Log Change Notices: Not Supported 00:12:23.825 Normal NVM Subsystem Shutdown event: Not Supported 00:12:23.825 Zone Descriptor Change Notices: Not Supported 00:12:23.825 Discovery Log Change Notices: Not Supported 00:12:23.825 Controller Attributes 00:12:23.825 128-bit Host Identifier: Supported 00:12:23.825 Non-Operational Permissive Mode: Not Supported 00:12:23.825 NVM Sets: Not Supported 
00:12:23.825 Read Recovery Levels: Not Supported 00:12:23.825 Endurance Groups: Not Supported 00:12:23.826 Predictable Latency Mode: Not Supported 00:12:23.826 Traffic Based Keep ALive: Not Supported 00:12:23.826 Namespace Granularity: Not Supported 00:12:23.826 SQ Associations: Not Supported 00:12:23.826 UUID List: Not Supported 00:12:23.826 Multi-Domain Subsystem: Not Supported 00:12:23.826 Fixed Capacity Management: Not Supported 00:12:23.826 Variable Capacity Management: Not Supported 00:12:23.826 Delete Endurance Group: Not Supported 00:12:23.826 Delete NVM Set: Not Supported 00:12:23.826 Extended LBA Formats Supported: Not Supported 00:12:23.826 Flexible Data Placement Supported: Not Supported 00:12:23.826 00:12:23.826 Controller Memory Buffer Support 00:12:23.826 ================================ 00:12:23.826 Supported: No 00:12:23.826 00:12:23.826 Persistent Memory Region Support 00:12:23.826 ================================ 00:12:23.826 Supported: No 00:12:23.826 00:12:23.826 Admin Command Set Attributes 00:12:23.826 ============================ 00:12:23.826 Security Send/Receive: Not Supported 00:12:23.826 Format NVM: Not Supported 00:12:23.826 Firmware Activate/Download: Not Supported 00:12:23.826 Namespace Management: Not Supported 00:12:23.826 Device Self-Test: Not Supported 00:12:23.826 Directives: Not Supported 00:12:23.826 NVMe-MI: Not Supported 00:12:23.826 Virtualization Management: Not Supported 00:12:23.826 Doorbell Buffer Config: Not Supported 00:12:23.826 Get LBA Status Capability: Not Supported 00:12:23.826 Command & Feature Lockdown Capability: Not Supported 00:12:23.826 Abort Command Limit: 4 00:12:23.826 Async Event Request Limit: 4 00:12:23.826 Number of Firmware Slots: N/A 00:12:23.826 Firmware Slot 1 Read-Only: N/A 00:12:23.826 Firmware Activation Without Reset: N/A 00:12:23.826 Multiple Update Detection Support: N/A 00:12:23.826 Firmware Update Granularity: No Information Provided 00:12:23.826 Per-Namespace SMART Log: No 00:12:23.826 
Asymmetric Namespace Access Log Page: Not Supported 00:12:23.826 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:23.826 Command Effects Log Page: Supported 00:12:23.826 Get Log Page Extended Data: Supported 00:12:23.826 Telemetry Log Pages: Not Supported 00:12:23.826 Persistent Event Log Pages: Not Supported 00:12:23.826 Supported Log Pages Log Page: May Support 00:12:23.826 Commands Supported & Effects Log Page: Not Supported 00:12:23.826 Feature Identifiers & Effects Log Page:May Support 00:12:23.826 NVMe-MI Commands & Effects Log Page: May Support 00:12:23.826 Data Area 4 for Telemetry Log: Not Supported 00:12:23.826 Error Log Page Entries Supported: 128 00:12:23.826 Keep Alive: Supported 00:12:23.826 Keep Alive Granularity: 10000 ms 00:12:23.826 00:12:23.826 NVM Command Set Attributes 00:12:23.826 ========================== 00:12:23.826 Submission Queue Entry Size 00:12:23.826 Max: 64 00:12:23.826 Min: 64 00:12:23.826 Completion Queue Entry Size 00:12:23.826 Max: 16 00:12:23.826 Min: 16 00:12:23.826 Number of Namespaces: 32 00:12:23.826 Compare Command: Supported 00:12:23.826 Write Uncorrectable Command: Not Supported 00:12:23.826 Dataset Management Command: Supported 00:12:23.826 Write Zeroes Command: Supported 00:12:23.826 Set Features Save Field: Not Supported 00:12:23.826 Reservations: Not Supported 00:12:23.826 Timestamp: Not Supported 00:12:23.826 Copy: Supported 00:12:23.826 Volatile Write Cache: Present 00:12:23.826 Atomic Write Unit (Normal): 1 00:12:23.826 Atomic Write Unit (PFail): 1 00:12:23.826 Atomic Compare & Write Unit: 1 00:12:23.826 Fused Compare & Write: Supported 00:12:23.826 Scatter-Gather List 00:12:23.826 SGL Command Set: Supported (Dword aligned) 00:12:23.826 SGL Keyed: Not Supported 00:12:23.826 SGL Bit Bucket Descriptor: Not Supported 00:12:23.826 SGL Metadata Pointer: Not Supported 00:12:23.826 Oversized SGL: Not Supported 00:12:23.826 SGL Metadata Address: Not Supported 00:12:23.826 SGL Offset: Not Supported 00:12:23.826 Transport 
SGL Data Block: Not Supported 00:12:23.826 Replay Protected Memory Block: Not Supported 00:12:23.826 00:12:23.826 Firmware Slot Information 00:12:23.826 ========================= 00:12:23.826 Active slot: 1 00:12:23.826 Slot 1 Firmware Revision: 24.09 00:12:23.826 00:12:23.826 00:12:23.826 Commands Supported and Effects 00:12:23.826 ============================== 00:12:23.826 Admin Commands 00:12:23.826 -------------- 00:12:23.826 Get Log Page (02h): Supported 00:12:23.826 Identify (06h): Supported 00:12:23.826 Abort (08h): Supported 00:12:23.826 Set Features (09h): Supported 00:12:23.826 Get Features (0Ah): Supported 00:12:23.826 Asynchronous Event Request (0Ch): Supported 00:12:23.826 Keep Alive (18h): Supported 00:12:23.826 I/O Commands 00:12:23.826 ------------ 00:12:23.826 Flush (00h): Supported LBA-Change 00:12:23.826 Write (01h): Supported LBA-Change 00:12:23.826 Read (02h): Supported 00:12:23.826 Compare (05h): Supported 00:12:23.826 Write Zeroes (08h): Supported LBA-Change 00:12:23.826 Dataset Management (09h): Supported LBA-Change 00:12:23.826 Copy (19h): Supported LBA-Change 00:12:23.826 00:12:23.826 Error Log 00:12:23.826 ========= 00:12:23.826 00:12:23.826 Arbitration 00:12:23.826 =========== 00:12:23.826 Arbitration Burst: 1 00:12:23.826 00:12:23.826 Power Management 00:12:23.826 ================ 00:12:23.826 Number of Power States: 1 00:12:23.826 Current Power State: Power State #0 00:12:23.826 Power State #0: 00:12:23.826 Max Power: 0.00 W 00:12:23.826 Non-Operational State: Operational 00:12:23.826 Entry Latency: Not Reported 00:12:23.826 Exit Latency: Not Reported 00:12:23.826 Relative Read Throughput: 0 00:12:23.826 Relative Read Latency: 0 00:12:23.826 Relative Write Throughput: 0 00:12:23.826 Relative Write Latency: 0 00:12:23.826 Idle Power: Not Reported 00:12:23.826 Active Power: Not Reported 00:12:23.826 Non-Operational Permissive Mode: Not Supported 00:12:23.826 00:12:23.826 Health Information 00:12:23.826 ================== 00:12:23.826 
Critical Warnings: 00:12:23.826 Available Spare Space: OK 00:12:23.826 Temperature: OK 00:12:23.826 Device Reliability: OK 00:12:23.826 Read Only: No 00:12:23.826 Volatile Memory Backup: OK 00:12:23.826 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:23.826 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:23.826 Available Spare: 0% 00:12:23.826 Available Sp[2024-07-24 20:40:19.273433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:23.826 [2024-07-24 20:40:19.281252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:23.826 [2024-07-24 20:40:19.281303] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:23.826 [2024-07-24 20:40:19.281321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.826 [2024-07-24 20:40:19.281332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.826 [2024-07-24 20:40:19.281342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.826 [2024-07-24 20:40:19.281352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.826 [2024-07-24 20:40:19.281420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:23.826 [2024-07-24 20:40:19.281440] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:23.826 [2024-07-24 20:40:19.282423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.826 [2024-07-24 20:40:19.282508] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:23.826 [2024-07-24 20:40:19.282538] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:23.826 [2024-07-24 20:40:19.283433] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:23.826 [2024-07-24 20:40:19.283459] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:23.826 [2024-07-24 20:40:19.283517] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:23.826 [2024-07-24 20:40:19.284761] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:23.826 are Threshold: 0% 00:12:23.826 Life Percentage Used: 0% 00:12:23.826 Data Units Read: 0 00:12:23.826 Data Units Written: 0 00:12:23.826 Host Read Commands: 0 00:12:23.826 Host Write Commands: 0 00:12:23.826 Controller Busy Time: 0 minutes 00:12:23.826 Power Cycles: 0 00:12:23.826 Power On Hours: 0 hours 00:12:23.826 Unsafe Shutdowns: 0 00:12:23.826 Unrecoverable Media Errors: 0 00:12:23.826 Lifetime Error Log Entries: 0 00:12:23.826 Warning Temperature Time: 0 minutes 00:12:23.826 Critical Temperature Time: 0 minutes 00:12:23.826 00:12:23.826 Number of Queues 00:12:23.826 ================ 00:12:23.827 Number of I/O Submission Queues: 127 00:12:23.827 Number of I/O Completion Queues: 127 00:12:23.827 00:12:23.827 Active Namespaces 00:12:23.827 ================= 00:12:23.827 Namespace ID:1 00:12:23.827 Error Recovery Timeout: Unlimited 00:12:23.827 Command Set Identifier: NVM (00h) 00:12:23.827 Deallocate: 
Supported 00:12:23.827 Deallocated/Unwritten Error: Not Supported 00:12:23.827 Deallocated Read Value: Unknown 00:12:23.827 Deallocate in Write Zeroes: Not Supported 00:12:23.827 Deallocated Guard Field: 0xFFFF 00:12:23.827 Flush: Supported 00:12:23.827 Reservation: Supported 00:12:23.827 Namespace Sharing Capabilities: Multiple Controllers 00:12:23.827 Size (in LBAs): 131072 (0GiB) 00:12:23.827 Capacity (in LBAs): 131072 (0GiB) 00:12:23.827 Utilization (in LBAs): 131072 (0GiB) 00:12:23.827 NGUID: 6E4BB324A6C54344AF1F5623B3B23735 00:12:23.827 UUID: 6e4bb324-a6c5-4344-af1f-5623b3b23735 00:12:23.827 Thin Provisioning: Not Supported 00:12:23.827 Per-NS Atomic Units: Yes 00:12:23.827 Atomic Boundary Size (Normal): 0 00:12:23.827 Atomic Boundary Size (PFail): 0 00:12:23.827 Atomic Boundary Offset: 0 00:12:23.827 Maximum Single Source Range Length: 65535 00:12:23.827 Maximum Copy Length: 65535 00:12:23.827 Maximum Source Range Count: 1 00:12:23.827 NGUID/EUI64 Never Reused: No 00:12:23.827 Namespace Write Protected: No 00:12:23.827 Number of LBA Formats: 1 00:12:23.827 Current LBA Format: LBA Format #00 00:12:23.827 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:23.827 00:12:23.827 20:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:23.827 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.084 [2024-07-24 20:40:19.514171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.344 Initializing NVMe Controllers 00:12:29.344 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:29.344 
Initialization complete. Launching workers. 00:12:29.344 ======================================================== 00:12:29.344 Latency(us) 00:12:29.344 Device Information : IOPS MiB/s Average min max 00:12:29.344 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33945.20 132.60 3770.11 1181.55 7381.63 00:12:29.344 ======================================================== 00:12:29.344 Total : 33945.20 132.60 3770.11 1181.55 7381.63 00:12:29.344 00:12:29.344 [2024-07-24 20:40:24.621604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.344 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:29.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.344 [2024-07-24 20:40:24.852303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.638 Initializing NVMe Controllers 00:12:34.638 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.638 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:34.638 Initialization complete. Launching workers. 
00:12:34.638 ======================================================== 00:12:34.638 Latency(us) 00:12:34.638 Device Information : IOPS MiB/s Average min max 00:12:34.638 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30720.22 120.00 4165.72 1207.61 10326.72 00:12:34.638 ======================================================== 00:12:34.638 Total : 30720.22 120.00 4165.72 1207.61 10326.72 00:12:34.638 00:12:34.638 [2024-07-24 20:40:29.873757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.638 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:34.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.638 [2024-07-24 20:40:30.088089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.913 [2024-07-24 20:40:35.237402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.913 Initializing NVMe Controllers 00:12:39.913 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.913 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:39.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:39.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:39.913 Initialization complete. Launching workers. 
00:12:39.913 Starting thread on core 2 00:12:39.913 Starting thread on core 3 00:12:39.913 Starting thread on core 1 00:12:39.913 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:39.913 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.171 [2024-07-24 20:40:35.547686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.450 [2024-07-24 20:40:38.618480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.450 Initializing NVMe Controllers 00:12:43.450 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.450 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:43.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:43.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:43.450 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:43.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:43.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:43.450 Initialization complete. Launching workers. 
00:12:43.450 Starting thread on core 1 with urgent priority queue 00:12:43.450 Starting thread on core 2 with urgent priority queue 00:12:43.450 Starting thread on core 3 with urgent priority queue 00:12:43.450 Starting thread on core 0 with urgent priority queue 00:12:43.450 SPDK bdev Controller (SPDK2 ) core 0: 4610.33 IO/s 21.69 secs/100000 ios 00:12:43.450 SPDK bdev Controller (SPDK2 ) core 1: 5496.67 IO/s 18.19 secs/100000 ios 00:12:43.450 SPDK bdev Controller (SPDK2 ) core 2: 5473.33 IO/s 18.27 secs/100000 ios 00:12:43.450 SPDK bdev Controller (SPDK2 ) core 3: 5888.00 IO/s 16.98 secs/100000 ios 00:12:43.450 ======================================================== 00:12:43.450 00:12:43.450 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.450 [2024-07-24 20:40:38.922774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.450 Initializing NVMe Controllers 00:12:43.450 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.450 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.450 Namespace ID: 1 size: 0GB 00:12:43.450 Initialization complete. 00:12:43.450 INFO: using host memory buffer for IO 00:12:43.450 Hello world! 
00:12:43.450 [2024-07-24 20:40:38.934937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.450 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.708 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.708 [2024-07-24 20:40:39.227080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.087 Initializing NVMe Controllers 00:12:45.087 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.087 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.087 Initialization complete. Launching workers. 00:12:45.087 submit (in ns) avg, min, max = 8462.9, 3530.0, 4016801.1 00:12:45.087 complete (in ns) avg, min, max = 26230.6, 2073.3, 6004525.6 00:12:45.087 00:12:45.087 Submit histogram 00:12:45.087 ================ 00:12:45.087 Range in us Cumulative Count 00:12:45.087 3.508 - 3.532: 0.0074% ( 1) 00:12:45.088 3.532 - 3.556: 0.1260% ( 16) 00:12:45.088 3.556 - 3.579: 0.6522% ( 71) 00:12:45.088 3.579 - 3.603: 2.2160% ( 211) 00:12:45.088 3.603 - 3.627: 6.1439% ( 530) 00:12:45.088 3.627 - 3.650: 11.7617% ( 758) 00:12:45.088 3.650 - 3.674: 19.5731% ( 1054) 00:12:45.088 3.674 - 3.698: 28.7853% ( 1243) 00:12:45.088 3.698 - 3.721: 37.4194% ( 1165) 00:12:45.088 3.721 - 3.745: 44.2822% ( 926) 00:12:45.088 3.745 - 3.769: 49.4923% ( 703) 00:12:45.088 3.769 - 3.793: 54.4579% ( 670) 00:12:45.088 3.793 - 3.816: 58.3117% ( 520) 00:12:45.088 3.816 - 3.840: 62.1582% ( 519) 00:12:45.088 3.840 - 3.864: 65.8638% ( 500) 00:12:45.088 3.864 - 3.887: 69.7102% ( 519) 00:12:45.088 3.887 - 3.911: 73.8976% ( 565) 00:12:45.088 3.911 - 3.935: 78.1590% ( 575) 00:12:45.088 3.935 - 3.959: 81.9610% ( 513) 00:12:45.088 3.959 - 
3.982: 84.5994% ( 356) 00:12:45.088 3.982 - 4.006: 86.6968% ( 283) 00:12:45.088 4.006 - 4.030: 88.2976% ( 216) 00:12:45.088 4.030 - 4.053: 89.6761% ( 186) 00:12:45.088 4.053 - 4.077: 91.1139% ( 194) 00:12:45.088 4.077 - 4.101: 92.1144% ( 135) 00:12:45.088 4.101 - 4.124: 93.1668% ( 142) 00:12:45.088 4.124 - 4.148: 94.0710% ( 122) 00:12:45.088 4.148 - 4.172: 94.7380% ( 90) 00:12:45.088 4.172 - 4.196: 95.2420% ( 68) 00:12:45.088 4.196 - 4.219: 95.6718% ( 58) 00:12:45.088 4.219 - 4.243: 95.8942% ( 30) 00:12:45.088 4.243 - 4.267: 96.1091% ( 29) 00:12:45.088 4.267 - 4.290: 96.3092% ( 27) 00:12:45.088 4.290 - 4.314: 96.4797% ( 23) 00:12:45.088 4.314 - 4.338: 96.5464% ( 9) 00:12:45.088 4.338 - 4.361: 96.6501% ( 14) 00:12:45.088 4.361 - 4.385: 96.7316% ( 11) 00:12:45.088 4.385 - 4.409: 96.8206% ( 12) 00:12:45.088 4.409 - 4.433: 96.8650% ( 6) 00:12:45.088 4.433 - 4.456: 96.9466% ( 11) 00:12:45.088 4.456 - 4.480: 96.9836% ( 5) 00:12:45.088 4.480 - 4.504: 97.0207% ( 5) 00:12:45.088 4.504 - 4.527: 97.0355% ( 2) 00:12:45.088 4.527 - 4.551: 97.0577% ( 3) 00:12:45.088 4.551 - 4.575: 97.0651% ( 1) 00:12:45.088 4.575 - 4.599: 97.0726% ( 1) 00:12:45.088 4.599 - 4.622: 97.0948% ( 3) 00:12:45.088 4.622 - 4.646: 97.1096% ( 2) 00:12:45.088 4.646 - 4.670: 97.1244% ( 2) 00:12:45.088 4.670 - 4.693: 97.1318% ( 1) 00:12:45.088 4.693 - 4.717: 97.1393% ( 1) 00:12:45.088 4.717 - 4.741: 97.1467% ( 1) 00:12:45.088 4.741 - 4.764: 97.1615% ( 2) 00:12:45.088 4.812 - 4.836: 97.1689% ( 1) 00:12:45.088 4.836 - 4.859: 97.1763% ( 1) 00:12:45.088 4.859 - 4.883: 97.2134% ( 5) 00:12:45.088 4.883 - 4.907: 97.2578% ( 6) 00:12:45.088 4.907 - 4.930: 97.2652% ( 1) 00:12:45.088 4.930 - 4.954: 97.2875% ( 3) 00:12:45.088 4.954 - 4.978: 97.3319% ( 6) 00:12:45.088 4.978 - 5.001: 97.3987% ( 9) 00:12:45.088 5.001 - 5.025: 97.4654% ( 9) 00:12:45.088 5.025 - 5.049: 97.4876% ( 3) 00:12:45.088 5.049 - 5.073: 97.5246% ( 5) 00:12:45.088 5.073 - 5.096: 97.5839% ( 8) 00:12:45.088 5.096 - 5.120: 97.6432% ( 8) 00:12:45.088 5.120 
- 5.144: 97.6580% ( 2) 00:12:45.088 5.144 - 5.167: 97.6877% ( 4) 00:12:45.088 5.167 - 5.191: 97.7618% ( 10) 00:12:45.088 5.191 - 5.215: 97.8211% ( 8) 00:12:45.088 5.215 - 5.239: 97.8359% ( 2) 00:12:45.088 5.239 - 5.262: 97.8656% ( 4) 00:12:45.088 5.262 - 5.286: 97.8730% ( 1) 00:12:45.088 5.286 - 5.310: 97.8804% ( 1) 00:12:45.088 5.310 - 5.333: 97.9026% ( 3) 00:12:45.088 5.333 - 5.357: 97.9174% ( 2) 00:12:45.088 5.357 - 5.381: 97.9323% ( 2) 00:12:45.088 5.381 - 5.404: 97.9471% ( 2) 00:12:45.088 5.428 - 5.452: 97.9545% ( 1) 00:12:45.088 5.452 - 5.476: 97.9767% ( 3) 00:12:45.088 5.476 - 5.499: 97.9841% ( 1) 00:12:45.088 5.523 - 5.547: 97.9916% ( 1) 00:12:45.088 5.547 - 5.570: 97.9990% ( 1) 00:12:45.088 5.570 - 5.594: 98.0286% ( 4) 00:12:45.088 5.594 - 5.618: 98.0360% ( 1) 00:12:45.088 5.618 - 5.641: 98.0434% ( 1) 00:12:45.088 5.665 - 5.689: 98.0508% ( 1) 00:12:45.088 5.713 - 5.736: 98.0583% ( 1) 00:12:45.088 5.807 - 5.831: 98.0657% ( 1) 00:12:45.088 6.044 - 6.068: 98.0731% ( 1) 00:12:45.088 6.068 - 6.116: 98.0879% ( 2) 00:12:45.088 6.116 - 6.163: 98.1027% ( 2) 00:12:45.088 6.258 - 6.305: 98.1101% ( 1) 00:12:45.088 6.590 - 6.637: 98.1175% ( 1) 00:12:45.088 6.637 - 6.684: 98.1250% ( 1) 00:12:45.088 6.779 - 6.827: 98.1324% ( 1) 00:12:45.088 7.016 - 7.064: 98.1398% ( 1) 00:12:45.088 7.111 - 7.159: 98.1472% ( 1) 00:12:45.088 7.159 - 7.206: 98.1620% ( 2) 00:12:45.088 7.206 - 7.253: 98.1694% ( 1) 00:12:45.088 7.443 - 7.490: 98.1768% ( 1) 00:12:45.088 7.585 - 7.633: 98.1842% ( 1) 00:12:45.088 7.633 - 7.680: 98.1917% ( 1) 00:12:45.088 7.727 - 7.775: 98.2065% ( 2) 00:12:45.088 7.775 - 7.822: 98.2139% ( 1) 00:12:45.088 7.917 - 7.964: 98.2287% ( 2) 00:12:45.088 7.964 - 8.012: 98.2435% ( 2) 00:12:45.088 8.012 - 8.059: 98.2509% ( 1) 00:12:45.088 8.107 - 8.154: 98.2658% ( 2) 00:12:45.088 8.154 - 8.201: 98.2806% ( 2) 00:12:45.088 8.201 - 8.249: 98.2880% ( 1) 00:12:45.088 8.249 - 8.296: 98.2954% ( 1) 00:12:45.088 8.296 - 8.344: 98.3028% ( 1) 00:12:45.088 8.344 - 8.391: 98.3102% ( 1) 
00:12:45.088 8.391 - 8.439: 98.3251% ( 2) 00:12:45.088 8.439 - 8.486: 98.3399% ( 2) 00:12:45.088 8.486 - 8.533: 98.3473% ( 1) 00:12:45.088 8.533 - 8.581: 98.3547% ( 1) 00:12:45.088 8.581 - 8.628: 98.3695% ( 2) 00:12:45.088 8.628 - 8.676: 98.3843% ( 2) 00:12:45.088 8.770 - 8.818: 98.3918% ( 1) 00:12:45.088 8.865 - 8.913: 98.3992% ( 1) 00:12:45.088 9.007 - 9.055: 98.4140% ( 2) 00:12:45.088 9.055 - 9.102: 98.4362% ( 3) 00:12:45.088 9.102 - 9.150: 98.4510% ( 2) 00:12:45.088 9.150 - 9.197: 98.4585% ( 1) 00:12:45.088 9.197 - 9.244: 98.4807% ( 3) 00:12:45.088 9.244 - 9.292: 98.5103% ( 4) 00:12:45.088 9.292 - 9.339: 98.5177% ( 1) 00:12:45.088 9.339 - 9.387: 98.5326% ( 2) 00:12:45.088 9.434 - 9.481: 98.5400% ( 1) 00:12:45.088 9.529 - 9.576: 98.5474% ( 1) 00:12:45.088 9.671 - 9.719: 98.5548% ( 1) 00:12:45.088 9.813 - 9.861: 98.5622% ( 1) 00:12:45.088 9.956 - 10.003: 98.5696% ( 1) 00:12:45.088 10.003 - 10.050: 98.5845% ( 2) 00:12:45.088 10.193 - 10.240: 98.5919% ( 1) 00:12:45.088 10.240 - 10.287: 98.6067% ( 2) 00:12:45.088 10.335 - 10.382: 98.6141% ( 1) 00:12:45.088 10.430 - 10.477: 98.6215% ( 1) 00:12:45.088 10.524 - 10.572: 98.6363% ( 2) 00:12:45.088 10.619 - 10.667: 98.6437% ( 1) 00:12:45.088 10.714 - 10.761: 98.6512% ( 1) 00:12:45.088 10.904 - 10.951: 98.6734% ( 3) 00:12:45.088 10.999 - 11.046: 98.6808% ( 1) 00:12:45.088 11.093 - 11.141: 98.6882% ( 1) 00:12:45.088 11.188 - 11.236: 98.6956% ( 1) 00:12:45.088 11.615 - 11.662: 98.7030% ( 1) 00:12:45.088 11.662 - 11.710: 98.7104% ( 1) 00:12:45.088 11.804 - 11.852: 98.7179% ( 1) 00:12:45.088 11.899 - 11.947: 98.7253% ( 1) 00:12:45.088 11.947 - 11.994: 98.7401% ( 2) 00:12:45.088 11.994 - 12.041: 98.7475% ( 1) 00:12:45.088 12.089 - 12.136: 98.7549% ( 1) 00:12:45.088 12.136 - 12.231: 98.7623% ( 1) 00:12:45.088 12.326 - 12.421: 98.7697% ( 1) 00:12:45.088 12.610 - 12.705: 98.7771% ( 1) 00:12:45.088 12.800 - 12.895: 98.7846% ( 1) 00:12:45.088 13.084 - 13.179: 98.7920% ( 1) 00:12:45.088 13.179 - 13.274: 98.7994% ( 1) 00:12:45.088 
13.274 - 13.369: 98.8068% ( 1) 00:12:45.088 13.369 - 13.464: 98.8142% ( 1) 00:12:45.088 13.464 - 13.559: 98.8216% ( 1) 00:12:45.088 13.559 - 13.653: 98.8364% ( 2) 00:12:45.088 13.653 - 13.748: 98.8438% ( 1) 00:12:45.088 14.033 - 14.127: 98.8513% ( 1) 00:12:45.088 14.127 - 14.222: 98.8587% ( 1) 00:12:45.088 14.222 - 14.317: 98.8735% ( 2) 00:12:45.088 14.317 - 14.412: 98.8957% ( 3) 00:12:45.088 14.412 - 14.507: 98.9105% ( 2) 00:12:45.088 14.507 - 14.601: 98.9180% ( 1) 00:12:45.088 14.601 - 14.696: 98.9254% ( 1) 00:12:45.088 14.696 - 14.791: 98.9328% ( 1) 00:12:45.088 14.886 - 14.981: 98.9624% ( 4) 00:12:45.088 14.981 - 15.076: 98.9698% ( 1) 00:12:45.088 16.972 - 17.067: 98.9772% ( 1) 00:12:45.088 17.351 - 17.446: 98.9995% ( 3) 00:12:45.088 17.446 - 17.541: 99.0291% ( 4) 00:12:45.088 17.541 - 17.636: 99.0810% ( 7) 00:12:45.088 17.636 - 17.730: 99.1032% ( 3) 00:12:45.088 17.730 - 17.825: 99.1477% ( 6) 00:12:45.088 17.825 - 17.920: 99.1774% ( 4) 00:12:45.088 17.920 - 18.015: 99.2218% ( 6) 00:12:45.088 18.015 - 18.110: 99.2811% ( 8) 00:12:45.088 18.110 - 18.204: 99.3330% ( 7) 00:12:45.089 18.204 - 18.299: 99.3923% ( 8) 00:12:45.089 18.299 - 18.394: 99.4516% ( 8) 00:12:45.089 18.394 - 18.489: 99.5257% ( 10) 00:12:45.089 18.489 - 18.584: 99.5924% ( 9) 00:12:45.089 18.584 - 18.679: 99.6443% ( 7) 00:12:45.089 18.679 - 18.773: 99.6517% ( 1) 00:12:45.089 18.773 - 18.868: 99.7035% ( 7) 00:12:45.089 18.868 - 18.963: 99.7258% ( 3) 00:12:45.089 18.963 - 19.058: 99.7406% ( 2) 00:12:45.089 19.153 - 19.247: 99.7480% ( 1) 00:12:45.089 19.247 - 19.342: 99.7628% ( 2) 00:12:45.089 19.437 - 19.532: 99.7703% ( 1) 00:12:45.089 19.532 - 19.627: 99.7777% ( 1) 00:12:45.089 19.816 - 19.911: 99.7851% ( 1) 00:12:45.089 19.911 - 20.006: 99.7925% ( 1) 00:12:45.089 20.006 - 20.101: 99.7999% ( 1) 00:12:45.089 20.101 - 20.196: 99.8147% ( 2) 00:12:45.089 20.670 - 20.764: 99.8221% ( 1) 00:12:45.089 21.523 - 21.618: 99.8295% ( 1) 00:12:45.089 21.618 - 21.713: 99.8370% ( 1) 00:12:45.089 22.566 - 22.661: 
99.8444% ( 1) 00:12:45.089 22.945 - 23.040: 99.8518% ( 1) 00:12:45.089 23.893 - 23.988: 99.8592% ( 1) 00:12:45.089 24.652 - 24.841: 99.8666% ( 1) 00:12:45.089 25.600 - 25.790: 99.8740% ( 1) 00:12:45.089 27.117 - 27.307: 99.8814% ( 1) 00:12:45.089 29.203 - 29.393: 99.8888% ( 1) 00:12:45.089 3980.705 - 4004.978: 99.9555% ( 9) 00:12:45.089 4004.978 - 4029.250: 100.0000% ( 6) 00:12:45.089 00:12:45.089 Complete histogram 00:12:45.089 ================== 00:12:45.089 Range in us Cumulative Count 00:12:45.089 2.062 - 2.074: 0.0222% ( 3) 00:12:45.089 2.074 - 2.086: 15.4080% ( 2076) 00:12:45.089 2.086 - 2.098: 38.8868% ( 3168) 00:12:45.089 2.098 - 2.110: 40.5618% ( 226) 00:12:45.089 2.110 - 2.121: 48.1731% ( 1027) 00:12:45.089 2.121 - 2.133: 55.1768% ( 945) 00:12:45.089 2.133 - 2.145: 57.0889% ( 258) 00:12:45.089 2.145 - 2.157: 65.8860% ( 1187) 00:12:45.089 2.157 - 2.169: 70.7849% ( 661) 00:12:45.089 2.169 - 2.181: 71.7928% ( 136) 00:12:45.089 2.181 - 2.193: 76.2099% ( 596) 00:12:45.089 2.193 - 2.204: 78.9965% ( 376) 00:12:45.089 2.204 - 2.216: 79.6561% ( 89) 00:12:45.089 2.216 - 2.228: 83.7027% ( 546) 00:12:45.089 2.228 - 2.240: 87.5639% ( 521) 00:12:45.089 2.240 - 2.252: 89.1499% ( 214) 00:12:45.089 2.252 - 2.264: 91.6031% ( 331) 00:12:45.089 2.264 - 2.276: 93.1298% ( 206) 00:12:45.089 2.276 - 2.287: 93.5077% ( 51) 00:12:45.089 2.287 - 2.299: 93.8857% ( 51) 00:12:45.089 2.299 - 2.311: 94.7306% ( 114) 00:12:45.089 2.311 - 2.323: 95.4643% ( 99) 00:12:45.089 2.323 - 2.335: 95.6274% ( 22) 00:12:45.089 2.335 - 2.347: 95.7163% ( 12) 00:12:45.089 2.347 - 2.359: 95.7978% ( 11) 00:12:45.089 2.359 - 2.370: 95.9312% ( 18) 00:12:45.089 2.370 - 2.382: 96.1758% ( 33) 00:12:45.089 2.382 - 2.394: 96.7168% ( 73) 00:12:45.089 2.394 - 2.406: 97.1096% ( 53) 00:12:45.089 2.406 - 2.418: 97.3171% ( 28) 00:12:45.089 2.418 - 2.430: 97.4505% ( 18) 00:12:45.089 2.430 - 2.441: 97.5988% ( 20) 00:12:45.089 2.441 - 2.453: 97.6729% ( 10) 00:12:45.089 2.453 - 2.465: 97.8804% ( 28) 00:12:45.089 2.465 - 
2.477: 97.9767% ( 13) 00:12:45.089 2.477 - 2.489: 98.0879% ( 15) 00:12:45.089 2.489 - 2.501: 98.1694% ( 11) 00:12:45.089 2.501 - 2.513: 98.2139% ( 6) 00:12:45.089 2.513 - 2.524: 98.2435% ( 4) 00:12:45.089 2.524 - 2.536: 98.2880% ( 6) 00:12:45.089 2.536 - 2.548: 98.3251% ( 5) 00:12:45.089 2.548 - 2.560: 98.3621% ( 5) 00:12:45.089 2.560 - 2.572: 98.3695% ( 1) 00:12:45.089 2.584 - 2.596: 98.3918% ( 3) 00:12:45.089 2.596 - 2.607: 98.3992% ( 1) 00:12:45.089 2.619 - 2.631: 98.4066% ( 1) 00:12:45.089 2.631 - 2.643: 98.4288% ( 3) 00:12:45.089 2.643 - 2.655: 98.4362% ( 1) 00:12:45.089 2.655 - 2.667: 98.4585% ( 3) 00:12:45.089 2.667 - 2.679: 98.4733% ( 2) 00:12:45.089 2.679 - 2.690: 98.4807% ( 1) 00:12:45.089 2.690 - 2.702: 98.4955% ( 2) 00:12:45.089 2.702 - 2.714: 98.5029% ( 1) 00:12:45.089 2.714 - 2.726: 98.5177% ( 2) 00:12:45.089 2.797 - 2.809: 98.5252% ( 1) 00:12:45.089 2.809 - 2.821: 98.5326% ( 1) 00:12:45.089 2.892 - 2.904: 98.5400% ( 1) 00:12:45.089 2.963 - 2.975: 98.5548% ( 2) 00:12:45.089 2.975 - 2.987: 98.5622% ( 1) 00:12:45.089 3.010 - 3.022: 98.5696% ( 1) 00:12:45.089 3.058 - 3.081: 98.5770% ( 1) 00:12:45.089 3.437 - 3.461: 98.5845% ( 1) 00:12:45.089 3.508 - 3.532: 98.6067% ( 3) 00:12:45.089 3.532 - 3.556: 98.6141% ( 1) 00:12:45.089 3.556 - 3.579: 98.6215% ( 1) 00:12:45.089 3.603 - 3.627: 98.6289% ( 1) 00:12:45.089 3.627 - 3.650: 98.6437% ( 2) 00:12:45.089 3.650 - 3.674: 98.6586% ( 2) 00:12:45.089 3.674 - 3.698: 98.6808% ( 3) 00:12:45.089 3.769 - 3.793: 98.6882% ( 1) 00:12:45.089 3.864 - 3.887: 98.6956% ( 1) 00:12:45.089 3.887 - 3.911: 98.7030% ( 1) 00:12:45.089 3.959 - 3.982: 98.7104% ( 1) 00:12:45.089 3.982 - 4.006: 98.7179% ( 1) 00:12:45.089 4.148 - 4.172: 98.7253% ( 1) 00:12:45.089 4.172 - 4.196: 98.7327% ( 1) 00:12:45.089 4.243 - 4.267: 98.7401% ( 1) 00:12:45.089 4.267 - 4.290: 98.7475% ( 1) 00:12:45.089 4.741 - 4.764: 98.7549% ( 1) 00:12:45.089 5.073 - 5.096: 98.7623% ( 1) 00:12:45.089 5.760 - 5.784: 98.7697% ( 1) 00:12:45.089 5.807 - 5.831: 98.7771% ( 1) 
00:12:45.089 5.831 - 5.855: 98.7846% ( 1) 00:12:45.089 6.447 - 6.495: 98.7920% ( 1) 00:12:45.089 6.684 - 6.732: 98.7994% ( 1) 00:12:45.089 6.732 - 6.779: 98.8068% ( 1) 00:12:45.089 6.874 - 6.921: 98.8142% ( 1) 00:12:45.089 7.111 - 7.159: 98.8216% ( 1) 00:12:45.089 7.159 - 7.206: 98.8290% ( 1) 00:12:45.089 7.585 - 7.633: 98.8364% ( 1) 00:12:45.089 7.680 - 7.727: 98.8438% ( 1) 00:12:45.089 8.770 - 8.818: 98.8513% ( 1) 00:12:45.089 13.274 - 13.369: 98.8587% ( 1) 00:12:45.089 15.170 - 15.265: 98.8661% ( 1) 00:12:45.089 15.644 - 15.739: 98.8735% ( 1) 00:12:45.089 15.739 - 15.834: 98.8809% ( 1) 00:12:45.089 15.834 - 15.929: 98.8883% ( 1) 00:12:45.089 15.929 - 16.024: 98.9031% ( 2) 00:12:45.089 16.024 - 16.119: 98.9402% ( 5) 00:12:45.089 16.119 - 16.213: 98.9698% ( 4) 00:12:45.089 16.213 - 16.308: 98.9921% ( 3) 00:12:45.089 16.308 - 16.403: 99.0069% ( 2) 00:12:45.089 16.403 - 16.498: 99.0143% ( 1) 00:12:45.089 16.498 - 16.593: 99.0588% ( 6) 00:12:45.089 16.593 - 16.687: 99.0958% ( 5) 00:12:45.089 16.687 - 16.782: 99.1403% ( 6) 00:12:45.089 16.782 - 16.877: 99.1848% ( 6) 00:12:45.089 16.877 - 16.972: 99.2218% ( 5) 00:12:45.089 16.972 - 17.067: 99.2515% ( 4) 00:12:45.089 17.067 - 17.161: 99.2737% ( 3) 00:12:45.089 17.161 - 17.256: 99.2811% ( 1) 00:12:45.089 17.351 - 17.446: 99.2959% ( 2) 00:12:45.089 17.636 - 17.730: 99.3108% ( 2) 00:12:45.089 17.825 - 17.920: 99.3182% ( 1) 00:12:45.089 17.920 - 18.015: 99.3256% ( 1) 00:12:45.089 18.015 - 18.110: 99.3330% ( 1) 00:12:45.089 18.204 - 18.299: 99.3404% ( 1) 00:12:45.089 18.394 - 18.489: 99.3478% ( 1) 00:12:45.089 18.773 - 18.868: 99.3552% ( 1) 00:12:45.089 20.006 - 20.101: 99.3626% ( 1) 00:12:45.089 20.101 - 20.196: 99.3700% ( 1) 00:12:45.089 23.609 - 23.704: 99.3775% ( 1) 00:12:45.089 23.704 - 23.799: 99.3849% ( 1) 00:12:45.089 34.133 - 34.323: 99.3923% ( 1) 00:12:45.089 164.599 - 165.357: 99.3997%[2024-07-24 20:40:40.329122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:12:45.089 ( 1) 00:12:45.089 1389.606 - 1395.674: 99.4071% ( 1) 00:12:45.089 3980.705 - 4004.978: 99.8221% ( 56) 00:12:45.089 4004.978 - 4029.250: 99.9926% ( 23) 00:12:45.089 5995.330 - 6019.603: 100.0000% ( 1) 00:12:45.089 00:12:45.089 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:45.089 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:45.089 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:45.089 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:45.089 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:45.089 [ 00:12:45.089 { 00:12:45.089 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.089 "subtype": "Discovery", 00:12:45.089 "listen_addresses": [], 00:12:45.089 "allow_any_host": true, 00:12:45.089 "hosts": [] 00:12:45.089 }, 00:12:45.089 { 00:12:45.089 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:45.089 "subtype": "NVMe", 00:12:45.090 "listen_addresses": [ 00:12:45.090 { 00:12:45.090 "trtype": "VFIOUSER", 00:12:45.090 "adrfam": "IPv4", 00:12:45.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:45.090 "trsvcid": "0" 00:12:45.090 } 00:12:45.090 ], 00:12:45.090 "allow_any_host": true, 00:12:45.090 "hosts": [], 00:12:45.090 "serial_number": "SPDK1", 00:12:45.090 "model_number": "SPDK bdev Controller", 00:12:45.090 "max_namespaces": 32, 00:12:45.090 "min_cntlid": 1, 00:12:45.090 "max_cntlid": 65519, 00:12:45.090 "namespaces": [ 00:12:45.090 { 00:12:45.090 "nsid": 1, 00:12:45.090 "bdev_name": "Malloc1", 00:12:45.090 "name": "Malloc1", 00:12:45.090 
"nguid": "3DD674A8CD0A4C2FA2FBC0F6FFC8DA36", 00:12:45.090 "uuid": "3dd674a8-cd0a-4c2f-a2fb-c0f6ffc8da36" 00:12:45.090 }, 00:12:45.090 { 00:12:45.090 "nsid": 2, 00:12:45.090 "bdev_name": "Malloc3", 00:12:45.090 "name": "Malloc3", 00:12:45.090 "nguid": "F00B106F78784DB6A4F60F5CEA7E12C1", 00:12:45.090 "uuid": "f00b106f-7878-4db6-a4f6-0f5cea7e12c1" 00:12:45.090 } 00:12:45.090 ] 00:12:45.090 }, 00:12:45.090 { 00:12:45.090 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:45.090 "subtype": "NVMe", 00:12:45.090 "listen_addresses": [ 00:12:45.090 { 00:12:45.090 "trtype": "VFIOUSER", 00:12:45.090 "adrfam": "IPv4", 00:12:45.090 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:45.090 "trsvcid": "0" 00:12:45.090 } 00:12:45.090 ], 00:12:45.090 "allow_any_host": true, 00:12:45.090 "hosts": [], 00:12:45.090 "serial_number": "SPDK2", 00:12:45.090 "model_number": "SPDK bdev Controller", 00:12:45.090 "max_namespaces": 32, 00:12:45.090 "min_cntlid": 1, 00:12:45.090 "max_cntlid": 65519, 00:12:45.090 "namespaces": [ 00:12:45.090 { 00:12:45.090 "nsid": 1, 00:12:45.090 "bdev_name": "Malloc2", 00:12:45.090 "name": "Malloc2", 00:12:45.090 "nguid": "6E4BB324A6C54344AF1F5623B3B23735", 00:12:45.090 "uuid": "6e4bb324-a6c5-4344-af1f-5623b3b23735" 00:12:45.090 } 00:12:45.090 ] 00:12:45.090 } 00:12:45.090 ] 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1572308 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:45.090 
20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:45.090 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:45.348 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.348 [2024-07-24 20:40:40.801741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.348 Malloc4 00:12:45.605 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:45.605 [2024-07-24 20:40:41.158384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.863 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:45.863 Asynchronous Event Request test 00:12:45.863 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.863 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.863 Registering asynchronous event callbacks... 00:12:45.863 Starting namespace attribute notice tests for all controllers... 
00:12:45.863 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:45.863 aer_cb - Changed Namespace 00:12:45.863 Cleaning up... 00:12:45.863 [ 00:12:45.863 { 00:12:45.863 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.863 "subtype": "Discovery", 00:12:45.863 "listen_addresses": [], 00:12:45.863 "allow_any_host": true, 00:12:45.863 "hosts": [] 00:12:45.863 }, 00:12:45.863 { 00:12:45.863 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:45.863 "subtype": "NVMe", 00:12:45.863 "listen_addresses": [ 00:12:45.863 { 00:12:45.863 "trtype": "VFIOUSER", 00:12:45.863 "adrfam": "IPv4", 00:12:45.863 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:45.863 "trsvcid": "0" 00:12:45.863 } 00:12:45.863 ], 00:12:45.863 "allow_any_host": true, 00:12:45.863 "hosts": [], 00:12:45.863 "serial_number": "SPDK1", 00:12:45.863 "model_number": "SPDK bdev Controller", 00:12:45.863 "max_namespaces": 32, 00:12:45.863 "min_cntlid": 1, 00:12:45.863 "max_cntlid": 65519, 00:12:45.863 "namespaces": [ 00:12:45.863 { 00:12:45.863 "nsid": 1, 00:12:45.863 "bdev_name": "Malloc1", 00:12:45.863 "name": "Malloc1", 00:12:45.863 "nguid": "3DD674A8CD0A4C2FA2FBC0F6FFC8DA36", 00:12:45.863 "uuid": "3dd674a8-cd0a-4c2f-a2fb-c0f6ffc8da36" 00:12:45.863 }, 00:12:45.863 { 00:12:45.863 "nsid": 2, 00:12:45.863 "bdev_name": "Malloc3", 00:12:45.863 "name": "Malloc3", 00:12:45.863 "nguid": "F00B106F78784DB6A4F60F5CEA7E12C1", 00:12:45.863 "uuid": "f00b106f-7878-4db6-a4f6-0f5cea7e12c1" 00:12:45.863 } 00:12:45.863 ] 00:12:45.863 }, 00:12:45.863 { 00:12:45.863 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:45.863 "subtype": "NVMe", 00:12:45.863 "listen_addresses": [ 00:12:45.863 { 00:12:45.863 "trtype": "VFIOUSER", 00:12:45.863 "adrfam": "IPv4", 00:12:45.863 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:45.863 "trsvcid": "0" 00:12:45.863 } 00:12:45.863 ], 00:12:45.863 "allow_any_host": true, 00:12:45.863 "hosts": [], 00:12:45.863 "serial_number": 
"SPDK2", 00:12:45.863 "model_number": "SPDK bdev Controller", 00:12:45.863 "max_namespaces": 32, 00:12:45.863 "min_cntlid": 1, 00:12:45.863 "max_cntlid": 65519, 00:12:45.863 "namespaces": [ 00:12:45.863 { 00:12:45.863 "nsid": 1, 00:12:45.863 "bdev_name": "Malloc2", 00:12:45.863 "name": "Malloc2", 00:12:45.863 "nguid": "6E4BB324A6C54344AF1F5623B3B23735", 00:12:45.863 "uuid": "6e4bb324-a6c5-4344-af1f-5623b3b23735" 00:12:45.863 }, 00:12:45.863 { 00:12:45.863 "nsid": 2, 00:12:45.863 "bdev_name": "Malloc4", 00:12:45.863 "name": "Malloc4", 00:12:45.863 "nguid": "82F480DDA6EB4ADCBC8BB2860DECE524", 00:12:45.863 "uuid": "82f480dd-a6eb-4adc-bc8b-b2860dece524" 00:12:45.863 } 00:12:45.863 ] 00:12:45.863 } 00:12:45.863 ] 00:12:45.863 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1572308 00:12:45.863 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:45.863 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1566712 00:12:45.863 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1566712 ']' 00:12:45.866 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1566712 00:12:45.866 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1566712 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1566712' 00:12:46.124 killing process with pid 1566712 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1566712 00:12:46.124 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1566712 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1572452 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1572452' 00:12:46.383 Process pid: 1572452 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1572452 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1572452 ']' 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.383 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:46.383 [2024-07-24 20:40:41.887777] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:46.383 [2024-07-24 20:40:41.888774] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:12:46.383 [2024-07-24 20:40:41.888831] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.383 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.383 [2024-07-24 20:40:41.945982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.641 [2024-07-24 20:40:42.052357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.641 [2024-07-24 20:40:42.052405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.641 [2024-07-24 20:40:42.052434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.641 [2024-07-24 20:40:42.052446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:46.641 [2024-07-24 20:40:42.052455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.641 [2024-07-24 20:40:42.052509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.641 [2024-07-24 20:40:42.052571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.641 [2024-07-24 20:40:42.052636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.641 [2024-07-24 20:40:42.052639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.641 [2024-07-24 20:40:42.152935] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:46.641 [2024-07-24 20:40:42.153174] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:46.641 [2024-07-24 20:40:42.153462] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:46.641 [2024-07-24 20:40:42.154090] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:46.641 [2024-07-24 20:40:42.154334] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:46.641 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.641 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:46.641 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:48.013 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:48.271 Malloc1 00:12:48.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:48.529 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:48.786 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:12:49.044 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.044 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:49.044 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:49.301 Malloc2 00:12:49.301 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:49.559 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:49.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1572452 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1572452 ']' 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1572452 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.073 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572452 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572452' 00:12:50.073 killing process with pid 1572452 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1572452 00:12:50.073 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1572452 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:50.331 00:12:50.331 real 0m52.708s 00:12:50.331 user 3m27.682s 00:12:50.331 sys 0m4.394s 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:50.331 ************************************ 00:12:50.331 END TEST nvmf_vfio_user 00:12:50.331 ************************************ 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.331 20:40:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.589 ************************************ 00:12:50.589 START TEST nvmf_vfio_user_nvme_compliance 00:12:50.589 ************************************ 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:50.589 * Looking for test storage... 00:12:50.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.589 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.590 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.590 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1572946 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1572946' 00:12:50.590 Process pid: 1572946 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1572946 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1572946 ']' 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.590 20:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.590 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 [2024-07-24 20:40:46.017671] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:12:50.590 [2024-07-24 20:40:46.017775] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.590 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.590 [2024-07-24 20:40:46.081591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.848 [2024-07-24 20:40:46.204636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.848 [2024-07-24 20:40:46.204696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.848 [2024-07-24 20:40:46.204712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.848 [2024-07-24 20:40:46.204725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.848 [2024-07-24 20:40:46.204737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:50.848 [2024-07-24 20:40:46.204827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.848 [2024-07-24 20:40:46.204908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.848 [2024-07-24 20:40:46.204911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.848 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.848 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:12:50.848 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:51.780 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.780 20:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 malloc0 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:52.038 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:52.038 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.038 00:12:52.038 00:12:52.038 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.038 http://cunit.sourceforge.net/ 00:12:52.038 00:12:52.038 00:12:52.038 Suite: nvme_compliance 00:12:52.038 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 20:40:47.538247] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.038 [2024-07-24 20:40:47.539727] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:52.038 [2024-07-24 20:40:47.539751] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:52.038 [2024-07-24 20:40:47.539778] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:52.038 [2024-07-24 20:40:47.544310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.038 passed 00:12:52.295 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 20:40:47.626857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.295 [2024-07-24 20:40:47.629880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.295 passed 00:12:52.295 Test: admin_identify_ns ...[2024-07-24 20:40:47.716768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.295 [2024-07-24 20:40:47.776278] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:52.295 [2024-07-24 20:40:47.784273] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:52.295 [2024-07-24 
20:40:47.805407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.295 passed 00:12:52.553 Test: admin_get_features_mandatory_features ...[2024-07-24 20:40:47.890584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.553 [2024-07-24 20:40:47.893605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.553 passed 00:12:52.553 Test: admin_get_features_optional_features ...[2024-07-24 20:40:47.977130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.553 [2024-07-24 20:40:47.980152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.553 passed 00:12:52.553 Test: admin_set_features_number_of_queues ...[2024-07-24 20:40:48.064731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.810 [2024-07-24 20:40:48.169374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.810 passed 00:12:52.810 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 20:40:48.252476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:52.810 [2024-07-24 20:40:48.255493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:52.810 passed 00:12:52.810 Test: admin_get_log_page_with_lpo ...[2024-07-24 20:40:48.338821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.068 [2024-07-24 20:40:48.405274] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:53.068 [2024-07-24 20:40:48.418341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.068 passed 00:12:53.068 Test: fabric_property_get ...[2024-07-24 20:40:48.501954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.068 [2024-07-24 20:40:48.503219] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:53.068 [2024-07-24 20:40:48.504974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.068 passed 00:12:53.068 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 20:40:48.586518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.068 [2024-07-24 20:40:48.587825] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:53.068 [2024-07-24 20:40:48.589554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.068 passed 00:12:53.326 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 20:40:48.671764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.326 [2024-07-24 20:40:48.759267] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:53.326 [2024-07-24 20:40:48.775251] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:53.326 [2024-07-24 20:40:48.780362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.326 passed 00:12:53.326 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 20:40:48.862929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.326 [2024-07-24 20:40:48.864285] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:53.326 [2024-07-24 20:40:48.865954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.583 passed 00:12:53.583 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 20:40:48.945754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.583 [2024-07-24 20:40:49.025269] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:12:53.583 [2024-07-24 20:40:49.049269] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:53.583 [2024-07-24 20:40:49.054352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.583 passed 00:12:53.583 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 20:40:49.136872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.583 [2024-07-24 20:40:49.138142] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:53.583 [2024-07-24 20:40:49.138196] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:53.583 [2024-07-24 20:40:49.139895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.841 passed 00:12:53.841 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 20:40:49.222762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:53.841 [2024-07-24 20:40:49.315258] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:53.841 [2024-07-24 20:40:49.323256] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:53.841 [2024-07-24 20:40:49.331259] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:53.841 [2024-07-24 20:40:49.339257] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:53.841 [2024-07-24 20:40:49.367366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:53.841 passed 00:12:54.098 Test: admin_create_io_sq_verify_pc ...[2024-07-24 20:40:49.451292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.098 [2024-07-24 20:40:49.469267] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:54.098 
[2024-07-24 20:40:49.486663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.098 passed 00:12:54.098 Test: admin_create_io_qp_max_qps ...[2024-07-24 20:40:49.569222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.499 [2024-07-24 20:40:50.664259] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:55.499 [2024-07-24 20:40:51.044641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.756 passed 00:12:55.756 Test: admin_create_io_sq_shared_cq ...[2024-07-24 20:40:51.127735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.756 [2024-07-24 20:40:51.263255] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:55.756 [2024-07-24 20:40:51.300356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.014 passed 00:12:56.014 00:12:56.014 Run Summary: Type Total Ran Passed Failed Inactive 00:12:56.014 suites 1 1 n/a 0 0 00:12:56.014 tests 18 18 18 0 0 00:12:56.014 asserts 360 360 360 0 n/a 00:12:56.014 00:12:56.014 Elapsed time = 1.557 seconds 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1572946 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1572946 ']' 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1572946 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.014 20:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572946 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1572946' 00:12:56.014 killing process with pid 1572946 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1572946 00:12:56.014 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1572946 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:56.273 00:12:56.273 real 0m5.777s 00:12:56.273 user 0m16.147s 00:12:56.273 sys 0m0.540s 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.273 ************************************ 00:12:56.273 END TEST nvmf_vfio_user_nvme_compliance 00:12:56.273 ************************************ 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.273 ************************************ 00:12:56.273 START TEST nvmf_vfio_user_fuzz 00:12:56.273 ************************************ 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:56.273 * Looking for test storage... 00:12:56.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.273 20:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:56.273 20:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:56.273 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1573786 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1573786' 00:12:56.274 Process pid: 1573786 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1573786 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1573786 ']' 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.274 20:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.274 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:56.840 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.840 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:56.840 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 malloc0 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:57.773 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:29.829 Fuzzing completed. Shutting down the fuzz application 00:13:29.829 00:13:29.829 Dumping successful admin opcodes: 00:13:29.829 8, 9, 10, 24, 00:13:29.829 Dumping successful io opcodes: 00:13:29.829 0, 00:13:29.829 NS: 0x200003a1ef00 I/O qp, Total commands completed: 745417, total successful commands: 2881, random_seed: 3181480576 00:13:29.829 NS: 0x200003a1ef00 admin qp, Total commands completed: 95176, total successful commands: 772, random_seed: 576967424 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1573786 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1573786 ']' 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1573786 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1573786 00:13:29.829 20:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1573786' 00:13:29.829 killing process with pid 1573786 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1573786 00:13:29.829 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1573786 00:13:29.829 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:29.829 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:29.829 00:13:29.830 real 0m32.340s 00:13:29.830 user 0m34.174s 00:13:29.830 sys 0m27.598s 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.830 ************************************ 00:13:29.830 END TEST nvmf_vfio_user_fuzz 00:13:29.830 ************************************ 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
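The vfio_user_fuzz section above boils down to a short RPC sequence against the SPDK target before launching nvme_fuzz. A minimal dry-run reconstruction of that sequence is sketched below; `RPC` is set to an `echo` stand-in here (in a real SPDK checkout it would point at `scripts/rpc.py`, a path assumed rather than taken from this log), and `/tmp/vfio-user-demo` stands in for the log's `/var/run/vfio-user` socket directory.

```shell
#!/usr/bin/env bash
# Hedged sketch of the RPC calls vfio_user_fuzz.sh issues in the log above.
# RPC is a dry-run echo; swap in the real rpc.py to run against a live target.
RPC="echo rpc.py"

setup_vfio_user_fuzz_target() {
    # transport plus a 64 MiB / 512-byte-block backing bdev, as in the log
    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /tmp/vfio-user-demo
    $RPC bdev_malloc_create 64 512 -b malloc0
    # subsystem, namespace, and the vfio-user listener the fuzzer connects to
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /tmp/vfio-user-demo -s 0
}

setup_vfio_user_fuzz_target
```

With a live target, the log then runs `nvme_fuzz -m 0x2 -t 30 -S <seed> -F '<trid>' -N -a` against the resulting listener and tears the subsystem down afterwards.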
00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.830 ************************************ 00:13:29.830 START TEST nvmf_auth_target 00:13:29.830 ************************************ 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:29.830 * Looking for test storage... 00:13:29.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.830 20:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:29.830 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.763 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.764 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:30.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:30.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.764 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:30.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:30.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.764 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:30.764 00:13:30.764 --- 10.0.0.2 ping statistics --- 00:13:30.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.764 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:13:30.764 00:13:30.764 --- 10.0.0.1 ping statistics --- 00:13:30.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.764 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1579110 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1579110 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1579110 ']' 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:30.764 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.765 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1579138 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=abfb7422c4a048467ecd077af75d01797de0d9da7e76e073 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9LF 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key abfb7422c4a048467ecd077af75d01797de0d9da7e76e073 0 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 abfb7422c4a048467ecd077af75d01797de0d9da7e76e073 0 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=abfb7422c4a048467ecd077af75d01797de0d9da7e76e073 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9LF 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9LF 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.9LF 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=795b6efea194ff3c26bd3eb099b9ab764509b04d663cf74f0d0e170673e0a1ce 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kX3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 795b6efea194ff3c26bd3eb099b9ab764509b04d663cf74f0d0e170673e0a1ce 3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 795b6efea194ff3c26bd3eb099b9ab764509b04d663cf74f0d0e170673e0a1ce 3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=795b6efea194ff3c26bd3eb099b9ab764509b04d663cf74f0d0e170673e0a1ce 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kX3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kX3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kX3 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea596713ccc1481d08568aad5c39b3ed 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Pj 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea596713ccc1481d08568aad5c39b3ed 1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
ea596713ccc1481d08568aad5c39b3ed 1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea596713ccc1481d08568aad5c39b3ed 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Pj 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Pj 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8Pj 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=043d4a588fd553421d784483bd39da86e6615f6c974704dc 00:13:31.332 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iJT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 043d4a588fd553421d784483bd39da86e6615f6c974704dc 2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 043d4a588fd553421d784483bd39da86e6615f6c974704dc 2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=043d4a588fd553421d784483bd39da86e6615f6c974704dc 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iJT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iJT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.iJT 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ced7cce025fe67f49fe40d46f2e0e4bf2973c1bd6664081f 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3AP 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ced7cce025fe67f49fe40d46f2e0e4bf2973c1bd6664081f 2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ced7cce025fe67f49fe40d46f2e0e4bf2973c1bd6664081f 2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ced7cce025fe67f49fe40d46f2e0e4bf2973c1bd6664081f 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3AP 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3AP 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.3AP 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=39321f6e006221e3cd5121dade3b5afe 00:13:31.332 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Yr6 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 39321f6e006221e3cd5121dade3b5afe 1 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 39321f6e006221e3cd5121dade3b5afe 1 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=39321f6e006221e3cd5121dade3b5afe 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Yr6 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Yr6 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Yr6 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3fc52855ac01e16286431787e0308bb3ea0915170059fae699cdd2a5aafca000 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Bf4 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3fc52855ac01e16286431787e0308bb3ea0915170059fae699cdd2a5aafca000 3 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 3fc52855ac01e16286431787e0308bb3ea0915170059fae699cdd2a5aafca000 3 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3fc52855ac01e16286431787e0308bb3ea0915170059fae699cdd2a5aafca000 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Bf4 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Bf4 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Bf4 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1579110 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1579110 ']' 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.590 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1579138 /var/tmp/host.sock 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1579138 ']' 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:31.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.847 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9LF 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9LF 00:13:32.105 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9LF 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.kX3 ]] 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kX3 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kX3 00:13:32.362 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kX3 00:13:32.620 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:32.620 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Pj 00:13:32.620 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.620 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.620 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.620 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8Pj 00:13:32.620 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8Pj 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.iJT ]] 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iJT 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iJT 00:13:32.877 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iJT 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3AP 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3AP 00:13:33.134 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3AP 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.Yr6 ]] 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yr6 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yr6 00:13:33.392 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yr6 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Bf4 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Bf4 00:13:33.700 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Bf4 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:33.957 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.215 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.472 00:13:34.472 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.472 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.472 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:13:34.729 { 00:13:34.729 "cntlid": 1, 00:13:34.729 "qid": 0, 00:13:34.729 "state": "enabled", 00:13:34.729 "thread": "nvmf_tgt_poll_group_000", 00:13:34.729 "listen_address": { 00:13:34.729 "trtype": "TCP", 00:13:34.729 "adrfam": "IPv4", 00:13:34.729 "traddr": "10.0.0.2", 00:13:34.729 "trsvcid": "4420" 00:13:34.729 }, 00:13:34.729 "peer_address": { 00:13:34.729 "trtype": "TCP", 00:13:34.729 "adrfam": "IPv4", 00:13:34.729 "traddr": "10.0.0.1", 00:13:34.729 "trsvcid": "42258" 00:13:34.729 }, 00:13:34.729 "auth": { 00:13:34.729 "state": "completed", 00:13:34.729 "digest": "sha256", 00:13:34.729 "dhgroup": "null" 00:13:34.729 } 00:13:34.729 } 00:13:34.729 ]' 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.729 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.987 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:35.918 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:36.176 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:36.176 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.176 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.176 20:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:36.176 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:36.176 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.434 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.691 00:13:36.691 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.691 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.691 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.950 { 00:13:36.950 "cntlid": 3, 00:13:36.950 "qid": 0, 00:13:36.950 "state": "enabled", 00:13:36.950 "thread": "nvmf_tgt_poll_group_000", 00:13:36.950 "listen_address": { 00:13:36.950 "trtype": "TCP", 00:13:36.950 "adrfam": "IPv4", 00:13:36.950 "traddr": "10.0.0.2", 00:13:36.950 "trsvcid": "4420" 00:13:36.950 }, 00:13:36.950 "peer_address": { 00:13:36.950 "trtype": "TCP", 00:13:36.950 "adrfam": "IPv4", 00:13:36.950 "traddr": "10.0.0.1", 00:13:36.950 "trsvcid": "42292" 00:13:36.950 }, 00:13:36.950 "auth": { 00:13:36.950 "state": "completed", 00:13:36.950 "digest": "sha256", 00:13:36.950 "dhgroup": "null" 00:13:36.950 } 00:13:36.950 } 00:13:36.950 ]' 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:36.950 20:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.950 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.207 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:38.157 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.436 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.436 
20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.001 00:13:39.001 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.002 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.002 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.260 { 00:13:39.260 "cntlid": 5, 00:13:39.260 "qid": 0, 00:13:39.260 "state": "enabled", 00:13:39.260 "thread": "nvmf_tgt_poll_group_000", 00:13:39.260 "listen_address": { 00:13:39.260 "trtype": "TCP", 00:13:39.260 "adrfam": "IPv4", 00:13:39.260 "traddr": "10.0.0.2", 00:13:39.260 "trsvcid": "4420" 00:13:39.260 }, 00:13:39.260 "peer_address": { 00:13:39.260 "trtype": "TCP", 00:13:39.260 "adrfam": "IPv4", 00:13:39.260 "traddr": 
"10.0.0.1", 00:13:39.260 "trsvcid": "42320" 00:13:39.260 }, 00:13:39.260 "auth": { 00:13:39.260 "state": "completed", 00:13:39.260 "digest": "sha256", 00:13:39.260 "dhgroup": "null" 00:13:39.260 } 00:13:39.260 } 00:13:39.260 ]' 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.260 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.518 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:13:40.451 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.709 20:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.709 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.966 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:40.966 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.966 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:40.967 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:41.225 00:13:41.225 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.225 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.225 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.483 20:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.483 { 00:13:41.483 "cntlid": 7, 00:13:41.483 "qid": 0, 00:13:41.483 "state": "enabled", 00:13:41.483 "thread": "nvmf_tgt_poll_group_000", 00:13:41.483 "listen_address": { 00:13:41.483 "trtype": "TCP", 00:13:41.483 "adrfam": "IPv4", 00:13:41.483 "traddr": "10.0.0.2", 00:13:41.483 "trsvcid": "4420" 00:13:41.483 }, 00:13:41.483 "peer_address": { 00:13:41.483 "trtype": "TCP", 00:13:41.483 "adrfam": "IPv4", 00:13:41.483 "traddr": "10.0.0.1", 00:13:41.483 "trsvcid": "42346" 00:13:41.483 }, 00:13:41.483 "auth": { 00:13:41.483 "state": "completed", 00:13:41.483 "digest": "sha256", 00:13:41.483 "dhgroup": "null" 00:13:41.483 } 00:13:41.483 } 00:13:41.483 ]' 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.483 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.741 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:42.673 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:43.237 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:13:43.237 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.237 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:43.237 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:43.237 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.238 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.495 00:13:43.495 20:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.495 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.495 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.753 { 00:13:43.753 "cntlid": 9, 00:13:43.753 "qid": 0, 00:13:43.753 "state": "enabled", 00:13:43.753 "thread": "nvmf_tgt_poll_group_000", 00:13:43.753 "listen_address": { 00:13:43.753 "trtype": "TCP", 00:13:43.753 "adrfam": "IPv4", 00:13:43.753 "traddr": "10.0.0.2", 00:13:43.753 "trsvcid": "4420" 00:13:43.753 }, 00:13:43.753 "peer_address": { 00:13:43.753 "trtype": "TCP", 00:13:43.753 "adrfam": "IPv4", 00:13:43.753 "traddr": "10.0.0.1", 00:13:43.753 "trsvcid": "42390" 00:13:43.753 }, 00:13:43.753 "auth": { 00:13:43.753 "state": "completed", 00:13:43.753 "digest": "sha256", 00:13:43.753 "dhgroup": "ffdhe2048" 00:13:43.753 } 00:13:43.753 } 00:13:43.753 ]' 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.753 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.011 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.944 20:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:44.944 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.202 20:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.202 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.767 00:13:45.767 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.767 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.767 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.025 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.025 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.025 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.026 { 
00:13:46.026 "cntlid": 11, 00:13:46.026 "qid": 0, 00:13:46.026 "state": "enabled", 00:13:46.026 "thread": "nvmf_tgt_poll_group_000", 00:13:46.026 "listen_address": { 00:13:46.026 "trtype": "TCP", 00:13:46.026 "adrfam": "IPv4", 00:13:46.026 "traddr": "10.0.0.2", 00:13:46.026 "trsvcid": "4420" 00:13:46.026 }, 00:13:46.026 "peer_address": { 00:13:46.026 "trtype": "TCP", 00:13:46.026 "adrfam": "IPv4", 00:13:46.026 "traddr": "10.0.0.1", 00:13:46.026 "trsvcid": "51700" 00:13:46.026 }, 00:13:46.026 "auth": { 00:13:46.026 "state": "completed", 00:13:46.026 "digest": "sha256", 00:13:46.026 "dhgroup": "ffdhe2048" 00:13:46.026 } 00:13:46.026 } 00:13:46.026 ]' 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.026 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.283 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.215 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.473 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.037 00:13:48.037 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.037 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.037 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.294 { 00:13:48.294 "cntlid": 13, 00:13:48.294 "qid": 0, 00:13:48.294 "state": "enabled", 00:13:48.294 "thread": "nvmf_tgt_poll_group_000", 00:13:48.294 "listen_address": { 00:13:48.294 "trtype": "TCP", 00:13:48.294 "adrfam": "IPv4", 00:13:48.294 "traddr": "10.0.0.2", 00:13:48.294 "trsvcid": "4420" 00:13:48.294 }, 00:13:48.294 "peer_address": { 00:13:48.294 "trtype": "TCP", 00:13:48.294 "adrfam": "IPv4", 00:13:48.294 "traddr": "10.0.0.1", 00:13:48.294 "trsvcid": "51740" 00:13:48.294 }, 00:13:48.294 "auth": { 00:13:48.294 "state": "completed", 00:13:48.294 "digest": "sha256", 00:13:48.294 "dhgroup": "ffdhe2048" 00:13:48.294 } 00:13:48.294 } 00:13:48.294 ]' 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.294 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.552 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:49.485 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:49.743 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.309 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.309 { 00:13:50.309 "cntlid": 15, 00:13:50.309 "qid": 0, 00:13:50.309 "state": "enabled", 00:13:50.309 "thread": "nvmf_tgt_poll_group_000", 00:13:50.309 "listen_address": { 00:13:50.309 "trtype": "TCP", 00:13:50.309 "adrfam": "IPv4", 00:13:50.309 "traddr": "10.0.0.2", 00:13:50.309 "trsvcid": "4420" 00:13:50.309 }, 00:13:50.309 "peer_address": { 00:13:50.309 "trtype": "TCP", 00:13:50.309 "adrfam": "IPv4", 00:13:50.309 "traddr": "10.0.0.1", 00:13:50.309 "trsvcid": "51762" 00:13:50.309 }, 00:13:50.309 "auth": { 
00:13:50.309 "state": "completed", 00:13:50.309 "digest": "sha256", 00:13:50.309 "dhgroup": "ffdhe2048" 00:13:50.309 } 00:13:50.309 } 00:13:50.309 ]' 00:13:50.309 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.566 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.822 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:51.752 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:52.008 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.009 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.625 00:13:52.625 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.625 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.625 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.625 { 00:13:52.625 "cntlid": 17, 00:13:52.625 "qid": 0, 00:13:52.625 "state": "enabled", 00:13:52.625 "thread": "nvmf_tgt_poll_group_000", 00:13:52.625 "listen_address": { 00:13:52.625 "trtype": "TCP", 00:13:52.625 "adrfam": "IPv4", 00:13:52.625 "traddr": "10.0.0.2", 00:13:52.625 "trsvcid": "4420" 00:13:52.625 }, 00:13:52.625 "peer_address": { 00:13:52.625 "trtype": "TCP", 00:13:52.625 "adrfam": "IPv4", 00:13:52.625 "traddr": "10.0.0.1", 00:13:52.625 "trsvcid": "51776" 00:13:52.625 }, 00:13:52.625 "auth": { 00:13:52.625 "state": "completed", 00:13:52.625 "digest": "sha256", 00:13:52.625 "dhgroup": "ffdhe3072" 00:13:52.625 } 00:13:52.625 } 00:13:52.625 ]' 00:13:52.625 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.882 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.140 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.073 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.331 20:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.331 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:54.588 00:13:54.588 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.588 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.588 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.151 { 00:13:55.151 "cntlid": 19, 00:13:55.151 "qid": 0, 00:13:55.151 "state": "enabled", 00:13:55.151 "thread": "nvmf_tgt_poll_group_000", 00:13:55.151 "listen_address": { 00:13:55.151 "trtype": "TCP", 00:13:55.151 "adrfam": "IPv4", 00:13:55.151 "traddr": "10.0.0.2", 00:13:55.151 "trsvcid": "4420" 00:13:55.151 }, 00:13:55.151 "peer_address": { 00:13:55.151 "trtype": "TCP", 00:13:55.151 "adrfam": "IPv4", 00:13:55.151 "traddr": "10.0.0.1", 00:13:55.151 "trsvcid": "47044" 00:13:55.151 }, 00:13:55.151 "auth": { 00:13:55.151 "state": "completed", 00:13:55.151 "digest": "sha256", 00:13:55.151 "dhgroup": "ffdhe3072" 00:13:55.151 } 00:13:55.151 } 00:13:55.151 ]' 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.151 
20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.151 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.408 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.338 20:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:56.338 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.597 20:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.597 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.854 00:13:57.110 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.110 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.110 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.366 { 
00:13:57.366 "cntlid": 21, 00:13:57.366 "qid": 0, 00:13:57.366 "state": "enabled", 00:13:57.366 "thread": "nvmf_tgt_poll_group_000", 00:13:57.366 "listen_address": { 00:13:57.366 "trtype": "TCP", 00:13:57.366 "adrfam": "IPv4", 00:13:57.366 "traddr": "10.0.0.2", 00:13:57.366 "trsvcid": "4420" 00:13:57.366 }, 00:13:57.366 "peer_address": { 00:13:57.366 "trtype": "TCP", 00:13:57.366 "adrfam": "IPv4", 00:13:57.366 "traddr": "10.0.0.1", 00:13:57.366 "trsvcid": "47078" 00:13:57.366 }, 00:13:57.366 "auth": { 00:13:57.366 "state": "completed", 00:13:57.366 "digest": "sha256", 00:13:57.366 "dhgroup": "ffdhe3072" 00:13:57.366 } 00:13:57.366 } 00:13:57.366 ]' 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.366 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.623 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:58.555 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:58.812 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.377 00:13:59.377 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.378 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.378 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.635 20:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.635 { 00:13:59.635 "cntlid": 23, 00:13:59.635 "qid": 0, 00:13:59.635 "state": "enabled", 00:13:59.635 "thread": "nvmf_tgt_poll_group_000", 00:13:59.635 "listen_address": { 00:13:59.635 "trtype": "TCP", 00:13:59.635 "adrfam": "IPv4", 00:13:59.635 "traddr": "10.0.0.2", 00:13:59.635 "trsvcid": "4420" 00:13:59.635 }, 00:13:59.635 "peer_address": { 00:13:59.635 "trtype": "TCP", 00:13:59.635 "adrfam": "IPv4", 00:13:59.635 "traddr": "10.0.0.1", 00:13:59.635 "trsvcid": "47104" 00:13:59.635 }, 00:13:59.635 "auth": { 00:13:59.635 "state": "completed", 00:13:59.635 "digest": "sha256", 00:13:59.635 "dhgroup": "ffdhe3072" 00:13:59.635 } 00:13:59.635 } 00:13:59.635 ]' 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.635 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.635 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.635 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.635 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.635 20:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.635 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.635 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.892 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:00.824 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.082 20:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.647 00:14:01.647 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.647 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.647 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.905 { 00:14:01.905 "cntlid": 25, 00:14:01.905 "qid": 0, 00:14:01.905 "state": "enabled", 00:14:01.905 "thread": "nvmf_tgt_poll_group_000", 00:14:01.905 "listen_address": { 00:14:01.905 "trtype": "TCP", 00:14:01.905 "adrfam": "IPv4", 00:14:01.905 "traddr": "10.0.0.2", 00:14:01.905 "trsvcid": "4420" 00:14:01.905 }, 00:14:01.905 "peer_address": { 00:14:01.905 "trtype": "TCP", 00:14:01.905 "adrfam": "IPv4", 00:14:01.905 "traddr": "10.0.0.1", 
00:14:01.905 "trsvcid": "47134" 00:14:01.905 }, 00:14:01.905 "auth": { 00:14:01.905 "state": "completed", 00:14:01.905 "digest": "sha256", 00:14:01.905 "dhgroup": "ffdhe4096" 00:14:01.905 } 00:14:01.905 } 00:14:01.905 ]' 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.905 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.163 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:03.566 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.566 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.132 00:14:04.132 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.132 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.132 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.390 { 00:14:04.390 "cntlid": 27, 00:14:04.390 "qid": 0, 00:14:04.390 "state": "enabled", 00:14:04.390 "thread": "nvmf_tgt_poll_group_000", 00:14:04.390 "listen_address": { 00:14:04.390 "trtype": "TCP", 00:14:04.390 "adrfam": "IPv4", 00:14:04.390 "traddr": "10.0.0.2", 00:14:04.390 "trsvcid": "4420" 00:14:04.390 }, 00:14:04.390 "peer_address": { 00:14:04.390 "trtype": "TCP", 00:14:04.390 "adrfam": "IPv4", 00:14:04.390 "traddr": "10.0.0.1", 00:14:04.390 "trsvcid": "47156" 00:14:04.390 }, 00:14:04.390 "auth": { 00:14:04.390 "state": "completed", 00:14:04.390 "digest": "sha256", 00:14:04.390 "dhgroup": "ffdhe4096" 00:14:04.390 } 00:14:04.390 } 00:14:04.390 ]' 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.390 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.648 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:05.579 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.835 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.836 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.836 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.836 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.836 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.397 00:14:06.397 20:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.397 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.397 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.654 { 00:14:06.654 "cntlid": 29, 00:14:06.654 "qid": 0, 00:14:06.654 "state": "enabled", 00:14:06.654 "thread": "nvmf_tgt_poll_group_000", 00:14:06.654 "listen_address": { 00:14:06.654 "trtype": "TCP", 00:14:06.654 "adrfam": "IPv4", 00:14:06.654 "traddr": "10.0.0.2", 00:14:06.654 "trsvcid": "4420" 00:14:06.654 }, 00:14:06.654 "peer_address": { 00:14:06.654 "trtype": "TCP", 00:14:06.654 "adrfam": "IPv4", 00:14:06.654 "traddr": "10.0.0.1", 00:14:06.654 "trsvcid": "59956" 00:14:06.654 }, 00:14:06.654 "auth": { 00:14:06.654 "state": "completed", 00:14:06.654 "digest": "sha256", 00:14:06.654 "dhgroup": "ffdhe4096" 00:14:06.654 } 00:14:06.654 } 00:14:06.654 ]' 00:14:06.654 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.654 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.911 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:07.842 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.157 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.415 00:14:08.415 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.415 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.415 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.672 { 00:14:08.672 "cntlid": 31, 00:14:08.672 "qid": 0, 00:14:08.672 "state": "enabled", 00:14:08.672 "thread": "nvmf_tgt_poll_group_000", 
00:14:08.672 "listen_address": {
00:14:08.672 "trtype": "TCP",
00:14:08.672 "adrfam": "IPv4",
00:14:08.672 "traddr": "10.0.0.2",
00:14:08.672 "trsvcid": "4420"
00:14:08.672 },
00:14:08.672 "peer_address": {
00:14:08.672 "trtype": "TCP",
00:14:08.672 "adrfam": "IPv4",
00:14:08.672 "traddr": "10.0.0.1",
00:14:08.672 "trsvcid": "59994"
00:14:08.672 },
00:14:08.672 "auth": {
00:14:08.672 "state": "completed",
00:14:08.672 "digest": "sha256",
00:14:08.672 "dhgroup": "ffdhe4096"
00:14:08.672 }
00:14:08.672 }
00:14:08.672 ]'
00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:08.672 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:08.673 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:08.673 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:08.929 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:08.929 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:08.929 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:09.186 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=:
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:10.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:10.117 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.374 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.938
00:14:10.938 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:10.938 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:10.938 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:11.196 {
00:14:11.196 "cntlid": 33,
00:14:11.196 "qid": 0,
00:14:11.196 "state": "enabled",
00:14:11.196 "thread": "nvmf_tgt_poll_group_000",
00:14:11.196 "listen_address": {
00:14:11.196 "trtype": "TCP",
00:14:11.196 "adrfam": "IPv4",
00:14:11.196 "traddr": "10.0.0.2",
00:14:11.196 "trsvcid": "4420"
00:14:11.196 },
00:14:11.196 "peer_address": {
00:14:11.196 "trtype": "TCP",
00:14:11.196 "adrfam": "IPv4",
00:14:11.196 "traddr": "10.0.0.1",
00:14:11.196 "trsvcid": "60022"
00:14:11.196 },
00:14:11.196 "auth": {
00:14:11.196 "state": "completed",
00:14:11.196 "digest": "sha256",
00:14:11.196 "dhgroup": "ffdhe6144"
00:14:11.196 }
00:14:11.196 }
00:14:11.196 ]'
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:11.196 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:11.197 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:11.197 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:11.454 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=:
00:14:12.399 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:12.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:12.399 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:12.399 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.399 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.656 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.656 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:12.656 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:12.656 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.656 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.914 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.914 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.914 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:13.478
00:14:13.478 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:13.478 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:13.478 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:13.736 {
00:14:13.736 "cntlid": 35,
00:14:13.736 "qid": 0,
00:14:13.736 "state": "enabled",
00:14:13.736 "thread": "nvmf_tgt_poll_group_000",
00:14:13.736 "listen_address": {
00:14:13.736 "trtype": "TCP",
00:14:13.736 "adrfam": "IPv4",
00:14:13.736 "traddr": "10.0.0.2",
00:14:13.736 "trsvcid": "4420"
00:14:13.736 },
00:14:13.736 "peer_address": {
00:14:13.736 "trtype": "TCP",
00:14:13.736 "adrfam": "IPv4",
00:14:13.736 "traddr": "10.0.0.1",
00:14:13.736 "trsvcid": "60052"
00:14:13.736 },
00:14:13.736 "auth": {
00:14:13.736 "state": "completed",
00:14:13.736 "digest": "sha256",
00:14:13.736 "dhgroup": "ffdhe6144"
00:14:13.736 }
00:14:13.736 }
00:14:13.736 ]'
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:13.736 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:13.995 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==:
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:14.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:14.927 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.185 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.748
00:14:15.748 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:15.748 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:15.748 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:16.006 {
00:14:16.006 "cntlid": 37,
00:14:16.006 "qid": 0,
00:14:16.006 "state": "enabled",
00:14:16.006 "thread": "nvmf_tgt_poll_group_000",
00:14:16.006 "listen_address": {
00:14:16.006 "trtype": "TCP",
00:14:16.006 "adrfam": "IPv4",
00:14:16.006 "traddr": "10.0.0.2",
00:14:16.006 "trsvcid": "4420"
00:14:16.006 },
00:14:16.006 "peer_address": {
00:14:16.006 "trtype": "TCP",
00:14:16.006 "adrfam": "IPv4",
00:14:16.006 "traddr": "10.0.0.1",
00:14:16.006 "trsvcid": "55908"
00:14:16.006 },
00:14:16.006 "auth": {
00:14:16.006 "state": "completed",
00:14:16.006 "digest": "sha256",
00:14:16.006 "dhgroup": "ffdhe6144"
00:14:16.006 }
00:14:16.006 }
00:14:16.006 ]'
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:16.006 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:16.263 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0:
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:17.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:17.195 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:17.760 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:18.323
00:14:18.323 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:18.323 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:18.323 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:18.581 {
00:14:18.581 "cntlid": 39,
00:14:18.581 "qid": 0,
00:14:18.581 "state": "enabled",
00:14:18.581 "thread": "nvmf_tgt_poll_group_000",
00:14:18.581 "listen_address": {
00:14:18.581 "trtype": "TCP",
00:14:18.581 "adrfam": "IPv4",
00:14:18.581 "traddr": "10.0.0.2",
00:14:18.581 "trsvcid": "4420"
00:14:18.581 },
00:14:18.581 "peer_address": {
00:14:18.581 "trtype": "TCP",
00:14:18.581 "adrfam": "IPv4",
00:14:18.581 "traddr": "10.0.0.1",
00:14:18.581 "trsvcid": "55940"
00:14:18.581 },
00:14:18.581 "auth": {
00:14:18.581 "state": "completed",
00:14:18.581 "digest": "sha256",
00:14:18.581 "dhgroup": "ffdhe6144"
00:14:18.581 }
00:14:18.581 }
00:14:18.581 ]'
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:18.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:18.581 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:18.581 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:18.581 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:18.581 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:18.581 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:18.838 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=:
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:20.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.210 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:21.143
00:14:21.143 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:21.143 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:21.143 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:21.401 {
00:14:21.401 "cntlid": 41,
00:14:21.401 "qid": 0,
00:14:21.401 "state": "enabled",
00:14:21.401 "thread": "nvmf_tgt_poll_group_000",
00:14:21.401 "listen_address": {
00:14:21.401 "trtype": "TCP",
00:14:21.401 "adrfam": "IPv4",
00:14:21.401 "traddr": "10.0.0.2",
00:14:21.401 "trsvcid": "4420"
00:14:21.401 },
00:14:21.401 "peer_address": {
00:14:21.401 "trtype": "TCP",
00:14:21.401 "adrfam": "IPv4",
00:14:21.401 "traddr": "10.0.0.1",
00:14:21.401 "trsvcid": "55982"
00:14:21.401 },
00:14:21.401 "auth": {
00:14:21.401 "state": "completed",
00:14:21.401 "digest": "sha256",
00:14:21.401 "dhgroup": "ffdhe8192"
00:14:21.401 }
00:14:21.401 }
00:14:21.401 ]'
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:21.401 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.659 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=:
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:22.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:22.592 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:22.850 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:23.815
00:14:23.815 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:23.815 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:23.815 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:24.073 {
00:14:24.073 "cntlid": 43,
00:14:24.073 "qid": 0,
00:14:24.073 "state": "enabled",
00:14:24.073 "thread": "nvmf_tgt_poll_group_000",
00:14:24.073 "listen_address": {
00:14:24.073 "trtype": "TCP",
00:14:24.073 "adrfam": "IPv4",
00:14:24.073 "traddr": "10.0.0.2",
00:14:24.073 "trsvcid": "4420"
00:14:24.073 },
00:14:24.073 "peer_address": {
00:14:24.073 "trtype": "TCP",
00:14:24.073 "adrfam": "IPv4",
00:14:24.073 "traddr": "10.0.0.1",
00:14:24.073 "trsvcid": "56002"
00:14:24.073 },
00:14:24.073 "auth": {
00:14:24.073 "state": "completed",
00:14:24.073 "digest": "sha256",
00:14:24.073 "dhgroup": "ffdhe8192"
00:14:24.073 }
00:14:24.073 }
00:14:24.073 ]'
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.073 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.331 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:14:25.263 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.263 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.263 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.263 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.520 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.520 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.520 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.520 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.777 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.778 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.778 20:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.711 00:14:26.711 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.711 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.711 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.711 { 00:14:26.711 "cntlid": 45, 00:14:26.711 "qid": 0, 00:14:26.711 "state": "enabled", 00:14:26.711 "thread": "nvmf_tgt_poll_group_000", 00:14:26.711 "listen_address": { 00:14:26.711 "trtype": "TCP", 00:14:26.711 "adrfam": "IPv4", 00:14:26.711 "traddr": "10.0.0.2", 00:14:26.711 "trsvcid": "4420" 00:14:26.711 }, 00:14:26.711 "peer_address": { 00:14:26.711 "trtype": "TCP", 00:14:26.711 "adrfam": "IPv4", 00:14:26.711 "traddr": "10.0.0.1", 
00:14:26.711 "trsvcid": "59900" 00:14:26.711 }, 00:14:26.711 "auth": { 00:14:26.711 "state": "completed", 00:14:26.711 "digest": "sha256", 00:14:26.711 "dhgroup": "ffdhe8192" 00:14:26.711 } 00:14:26.711 } 00:14:26.711 ]' 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.711 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.969 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.969 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.969 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.969 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.969 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.227 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.159 20:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.159 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.417 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.350 00:14:29.350 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.350 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.350 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.608 20:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.608 { 00:14:29.608 "cntlid": 47, 00:14:29.608 "qid": 0, 00:14:29.608 "state": "enabled", 00:14:29.608 "thread": "nvmf_tgt_poll_group_000", 00:14:29.608 "listen_address": { 00:14:29.608 "trtype": "TCP", 00:14:29.608 "adrfam": "IPv4", 00:14:29.608 "traddr": "10.0.0.2", 00:14:29.608 "trsvcid": "4420" 00:14:29.608 }, 00:14:29.608 "peer_address": { 00:14:29.608 "trtype": "TCP", 00:14:29.608 "adrfam": "IPv4", 00:14:29.608 "traddr": "10.0.0.1", 00:14:29.608 "trsvcid": "59920" 00:14:29.608 }, 00:14:29.608 "auth": { 00:14:29.608 "state": "completed", 00:14:29.608 "digest": "sha256", 00:14:29.608 "dhgroup": "ffdhe8192" 00:14:29.608 } 00:14:29.608 } 00:14:29.608 ]' 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.608 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.173 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:31.106 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.363 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.364 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.621 00:14:31.621 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.621 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.621 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.879 { 00:14:31.879 "cntlid": 49, 00:14:31.879 "qid": 0, 00:14:31.879 "state": "enabled", 00:14:31.879 "thread": "nvmf_tgt_poll_group_000", 00:14:31.879 "listen_address": { 00:14:31.879 "trtype": "TCP", 00:14:31.879 "adrfam": "IPv4", 00:14:31.879 "traddr": "10.0.0.2", 00:14:31.879 "trsvcid": "4420" 00:14:31.879 }, 00:14:31.879 "peer_address": { 00:14:31.879 "trtype": "TCP", 00:14:31.879 "adrfam": "IPv4", 00:14:31.879 "traddr": "10.0.0.1", 00:14:31.879 "trsvcid": "59934" 00:14:31.879 }, 00:14:31.879 "auth": { 00:14:31.879 "state": "completed", 00:14:31.879 "digest": "sha384", 00:14:31.879 "dhgroup": "null" 00:14:31.879 } 00:14:31.879 } 00:14:31.879 ]' 00:14:31.879 
20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.879 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.137 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.069 
20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.069 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.328 20:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.328 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.893 00:14:33.893 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.893 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.893 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.151 { 00:14:34.151 "cntlid": 51, 00:14:34.151 "qid": 0, 00:14:34.151 "state": "enabled", 00:14:34.151 "thread": "nvmf_tgt_poll_group_000", 00:14:34.151 "listen_address": { 00:14:34.151 "trtype": "TCP", 00:14:34.151 "adrfam": "IPv4", 00:14:34.151 "traddr": "10.0.0.2", 00:14:34.151 "trsvcid": "4420" 00:14:34.151 }, 00:14:34.151 "peer_address": { 00:14:34.151 "trtype": "TCP", 00:14:34.151 "adrfam": "IPv4", 00:14:34.151 "traddr": "10.0.0.1", 00:14:34.151 "trsvcid": "59962" 00:14:34.151 }, 00:14:34.151 "auth": { 00:14:34.151 "state": "completed", 00:14:34.151 "digest": "sha384", 00:14:34.151 "dhgroup": "null" 00:14:34.151 } 00:14:34.151 } 00:14:34.151 ]' 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.151 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.409 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:35.342 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:35.600 20:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.600 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.858 00:14:35.858 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.858 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.858 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.116 { 00:14:36.116 "cntlid": 53, 00:14:36.116 "qid": 0, 00:14:36.116 "state": "enabled", 00:14:36.116 "thread": "nvmf_tgt_poll_group_000", 00:14:36.116 "listen_address": { 00:14:36.116 "trtype": "TCP", 00:14:36.116 "adrfam": "IPv4", 00:14:36.116 "traddr": "10.0.0.2", 00:14:36.116 "trsvcid": "4420" 00:14:36.116 }, 00:14:36.116 "peer_address": { 00:14:36.116 "trtype": "TCP", 00:14:36.116 "adrfam": "IPv4", 00:14:36.116 "traddr": "10.0.0.1", 00:14:36.116 "trsvcid": "53282" 00:14:36.116 }, 00:14:36.116 "auth": { 00:14:36.116 "state": "completed", 00:14:36.116 "digest": "sha384", 00:14:36.116 "dhgroup": "null" 00:14:36.116 } 00:14:36.116 } 00:14:36.116 ]' 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.116 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.373 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.373 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.373 20:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.373 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.373 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.373 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.630 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:37.564 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.821 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.079 00:14:38.079 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.079 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.079 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.337 { 00:14:38.337 "cntlid": 55, 00:14:38.337 "qid": 0, 00:14:38.337 "state": "enabled", 00:14:38.337 "thread": "nvmf_tgt_poll_group_000", 00:14:38.337 "listen_address": { 00:14:38.337 "trtype": "TCP", 00:14:38.337 "adrfam": "IPv4", 00:14:38.337 "traddr": "10.0.0.2", 00:14:38.337 "trsvcid": "4420" 00:14:38.337 }, 00:14:38.337 "peer_address": { 00:14:38.337 "trtype": "TCP", 00:14:38.337 "adrfam": "IPv4", 00:14:38.337 "traddr": "10.0.0.1", 00:14:38.337 "trsvcid": "53314" 00:14:38.337 }, 00:14:38.337 "auth": { 
00:14:38.337 "state": "completed", 00:14:38.337 "digest": "sha384", 00:14:38.337 "dhgroup": "null" 00:14:38.337 } 00:14:38.337 } 00:14:38.337 ]' 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:38.337 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.594 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.594 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.594 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.875 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:39.810 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.068 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.325 00:14:40.325 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.325 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.325 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.583 { 00:14:40.583 "cntlid": 57, 00:14:40.583 "qid": 0, 00:14:40.583 "state": "enabled", 00:14:40.583 "thread": "nvmf_tgt_poll_group_000", 00:14:40.583 "listen_address": { 00:14:40.583 "trtype": "TCP", 00:14:40.583 "adrfam": "IPv4", 00:14:40.583 "traddr": "10.0.0.2", 00:14:40.583 "trsvcid": "4420" 00:14:40.583 }, 00:14:40.583 "peer_address": { 00:14:40.583 "trtype": "TCP", 00:14:40.583 "adrfam": "IPv4", 00:14:40.583 "traddr": "10.0.0.1", 00:14:40.583 "trsvcid": "53332" 00:14:40.583 }, 00:14:40.583 "auth": { 00:14:40.583 "state": "completed", 00:14:40.583 "digest": "sha384", 00:14:40.583 "dhgroup": "ffdhe2048" 00:14:40.583 } 00:14:40.583 } 00:14:40.583 ]' 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.583 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.840 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:14:41.773 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.031 20:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.289 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.289 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.289 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:42.546 00:14:42.546 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.546 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.546 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.804 { 00:14:42.804 "cntlid": 59, 00:14:42.804 "qid": 0, 00:14:42.804 "state": "enabled", 00:14:42.804 "thread": "nvmf_tgt_poll_group_000", 00:14:42.804 "listen_address": { 00:14:42.804 "trtype": "TCP", 00:14:42.804 "adrfam": "IPv4", 00:14:42.804 "traddr": "10.0.0.2", 00:14:42.804 "trsvcid": "4420" 00:14:42.804 }, 00:14:42.804 "peer_address": { 00:14:42.804 "trtype": "TCP", 00:14:42.804 "adrfam": "IPv4", 00:14:42.804 "traddr": "10.0.0.1", 00:14:42.804 "trsvcid": "53364" 00:14:42.804 }, 00:14:42.804 "auth": { 00:14:42.804 "state": "completed", 00:14:42.804 "digest": "sha384", 00:14:42.804 "dhgroup": "ffdhe2048" 00:14:42.804 } 00:14:42.804 } 00:14:42.804 ]' 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.804 
20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.804 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.805 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.062 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.994 20:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:43.994 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.559 20:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.559 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.816 00:14:44.816 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.816 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.816 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.074 { 
00:14:45.074 "cntlid": 61, 00:14:45.074 "qid": 0, 00:14:45.074 "state": "enabled", 00:14:45.074 "thread": "nvmf_tgt_poll_group_000", 00:14:45.074 "listen_address": { 00:14:45.074 "trtype": "TCP", 00:14:45.074 "adrfam": "IPv4", 00:14:45.074 "traddr": "10.0.0.2", 00:14:45.074 "trsvcid": "4420" 00:14:45.074 }, 00:14:45.074 "peer_address": { 00:14:45.074 "trtype": "TCP", 00:14:45.074 "adrfam": "IPv4", 00:14:45.074 "traddr": "10.0.0.1", 00:14:45.074 "trsvcid": "48228" 00:14:45.074 }, 00:14:45.074 "auth": { 00:14:45.074 "state": "completed", 00:14:45.074 "digest": "sha384", 00:14:45.074 "dhgroup": "ffdhe2048" 00:14:45.074 } 00:14:45.074 } 00:14:45.074 ]' 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.074 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.331 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:46.264 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.522 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.088 00:14:47.088 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.088 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.088 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.346 20:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.346 { 00:14:47.346 "cntlid": 63, 00:14:47.346 "qid": 0, 00:14:47.346 "state": "enabled", 00:14:47.346 "thread": "nvmf_tgt_poll_group_000", 00:14:47.346 "listen_address": { 00:14:47.346 "trtype": "TCP", 00:14:47.346 "adrfam": "IPv4", 00:14:47.346 "traddr": "10.0.0.2", 00:14:47.346 "trsvcid": "4420" 00:14:47.346 }, 00:14:47.346 "peer_address": { 00:14:47.346 "trtype": "TCP", 00:14:47.346 "adrfam": "IPv4", 00:14:47.346 "traddr": "10.0.0.1", 00:14:47.346 "trsvcid": "48254" 00:14:47.346 }, 00:14:47.346 "auth": { 00:14:47.346 "state": "completed", 00:14:47.346 "digest": "sha384", 00:14:47.346 "dhgroup": "ffdhe2048" 00:14:47.346 } 00:14:47.346 } 00:14:47.346 ]' 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.346 20:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.346 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.604 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.536 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.794 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.795 20:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.052 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.310 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.568 { 00:14:49.568 "cntlid": 65, 00:14:49.568 "qid": 0, 00:14:49.568 "state": "enabled", 00:14:49.568 "thread": "nvmf_tgt_poll_group_000", 00:14:49.568 "listen_address": { 00:14:49.568 "trtype": "TCP", 00:14:49.568 "adrfam": "IPv4", 00:14:49.568 "traddr": "10.0.0.2", 00:14:49.568 "trsvcid": "4420" 00:14:49.568 }, 00:14:49.568 "peer_address": { 00:14:49.568 "trtype": "TCP", 00:14:49.568 "adrfam": "IPv4", 00:14:49.568 "traddr": "10.0.0.1", 
00:14:49.568 "trsvcid": "48282" 00:14:49.568 }, 00:14:49.568 "auth": { 00:14:49.568 "state": "completed", 00:14:49.568 "digest": "sha384", 00:14:49.568 "dhgroup": "ffdhe3072" 00:14:49.568 } 00:14:49.568 } 00:14:49.568 ]' 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.568 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.826 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:50.759 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.017 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.275 00:14:51.275 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.275 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.275 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.532 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.532 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.532 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:51.532 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.532 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.533 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.533 { 00:14:51.533 "cntlid": 67, 00:14:51.533 "qid": 0, 00:14:51.533 "state": "enabled", 00:14:51.533 "thread": "nvmf_tgt_poll_group_000", 00:14:51.533 "listen_address": { 00:14:51.533 "trtype": "TCP", 00:14:51.533 "adrfam": "IPv4", 00:14:51.533 "traddr": "10.0.0.2", 00:14:51.533 "trsvcid": "4420" 00:14:51.533 }, 00:14:51.533 "peer_address": { 00:14:51.533 "trtype": "TCP", 00:14:51.533 "adrfam": "IPv4", 00:14:51.533 "traddr": "10.0.0.1", 00:14:51.533 "trsvcid": "48306" 00:14:51.533 }, 00:14:51.533 "auth": { 00:14:51.533 "state": "completed", 00:14:51.533 "digest": "sha384", 00:14:51.533 "dhgroup": "ffdhe3072" 00:14:51.533 } 00:14:51.533 } 00:14:51.533 ]' 00:14:51.533 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.790 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.047 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:52.981 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.239 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.497 00:14:53.497 20:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.497 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.497 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.760 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.760 { 00:14:53.760 "cntlid": 69, 00:14:53.760 "qid": 0, 00:14:53.760 "state": "enabled", 00:14:53.760 "thread": "nvmf_tgt_poll_group_000", 00:14:53.760 "listen_address": { 00:14:53.760 "trtype": "TCP", 00:14:53.760 "adrfam": "IPv4", 00:14:53.760 "traddr": "10.0.0.2", 00:14:53.760 "trsvcid": "4420" 00:14:53.760 }, 00:14:53.760 "peer_address": { 00:14:53.760 "trtype": "TCP", 00:14:53.760 "adrfam": "IPv4", 00:14:53.760 "traddr": "10.0.0.1", 00:14:53.760 "trsvcid": "48326" 00:14:53.760 }, 00:14:53.760 "auth": { 00:14:53.760 "state": "completed", 00:14:53.760 "digest": "sha384", 00:14:53.760 "dhgroup": "ffdhe3072" 00:14:53.760 } 00:14:53.760 } 00:14:53.760 ]' 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.068 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.333 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:55.268 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.525 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.782 00:14:55.782 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.782 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.782 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.039 { 00:14:56.039 "cntlid": 71, 00:14:56.039 "qid": 0, 00:14:56.039 "state": "enabled", 00:14:56.039 "thread": "nvmf_tgt_poll_group_000", 
00:14:56.039 "listen_address": { 00:14:56.039 "trtype": "TCP", 00:14:56.039 "adrfam": "IPv4", 00:14:56.039 "traddr": "10.0.0.2", 00:14:56.039 "trsvcid": "4420" 00:14:56.039 }, 00:14:56.039 "peer_address": { 00:14:56.039 "trtype": "TCP", 00:14:56.039 "adrfam": "IPv4", 00:14:56.039 "traddr": "10.0.0.1", 00:14:56.039 "trsvcid": "34654" 00:14:56.039 }, 00:14:56.039 "auth": { 00:14:56.039 "state": "completed", 00:14:56.039 "digest": "sha384", 00:14:56.039 "dhgroup": "ffdhe3072" 00:14:56.039 } 00:14:56.039 } 00:14:56.039 ]' 00:14:56.039 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.297 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.554 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 
00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:57.489 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.747 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.311 00:14:58.311 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.311 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.311 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.569 20:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.569 { 00:14:58.569 "cntlid": 73, 00:14:58.569 "qid": 0, 00:14:58.569 "state": "enabled", 00:14:58.569 "thread": "nvmf_tgt_poll_group_000", 00:14:58.569 "listen_address": { 00:14:58.569 "trtype": "TCP", 00:14:58.569 "adrfam": "IPv4", 00:14:58.569 "traddr": "10.0.0.2", 00:14:58.569 "trsvcid": "4420" 00:14:58.569 }, 00:14:58.569 "peer_address": { 00:14:58.569 "trtype": "TCP", 00:14:58.569 "adrfam": "IPv4", 00:14:58.569 "traddr": "10.0.0.1", 00:14:58.569 "trsvcid": "34680" 00:14:58.569 }, 00:14:58.569 "auth": { 00:14:58.569 "state": "completed", 00:14:58.569 "digest": "sha384", 00:14:58.569 "dhgroup": "ffdhe4096" 00:14:58.569 } 00:14:58.569 } 00:14:58.569 ]' 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.569 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.569 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.569 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.569 20:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.569 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.569 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.827 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:14:59.758 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.014 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.015 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.577 00:15:00.577 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.577 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.577 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.833 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.833 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.833 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.833 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.834 { 00:15:00.834 "cntlid": 75, 00:15:00.834 "qid": 0, 00:15:00.834 "state": "enabled", 00:15:00.834 "thread": "nvmf_tgt_poll_group_000", 00:15:00.834 "listen_address": { 00:15:00.834 "trtype": "TCP", 00:15:00.834 "adrfam": "IPv4", 00:15:00.834 "traddr": "10.0.0.2", 00:15:00.834 "trsvcid": "4420" 00:15:00.834 }, 00:15:00.834 "peer_address": { 00:15:00.834 "trtype": "TCP", 00:15:00.834 "adrfam": "IPv4", 00:15:00.834 "traddr": "10.0.0.1", 00:15:00.834 "trsvcid": "34698" 00:15:00.834 
}, 00:15:00.834 "auth": { 00:15:00.834 "state": "completed", 00:15:00.834 "digest": "sha384", 00:15:00.834 "dhgroup": "ffdhe4096" 00:15:00.834 } 00:15:00.834 } 00:15:00.834 ]' 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.834 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.091 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:02.023 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:02.279 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.280 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.843 00:15:02.843 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.843 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.843 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.100 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.100 { 00:15:03.100 "cntlid": 77, 00:15:03.100 "qid": 0, 00:15:03.100 "state": "enabled", 00:15:03.100 "thread": "nvmf_tgt_poll_group_000", 00:15:03.100 "listen_address": { 00:15:03.100 "trtype": "TCP", 00:15:03.100 "adrfam": "IPv4", 00:15:03.100 "traddr": "10.0.0.2", 00:15:03.100 "trsvcid": "4420" 00:15:03.100 }, 00:15:03.100 "peer_address": { 00:15:03.100 "trtype": "TCP", 00:15:03.100 "adrfam": "IPv4", 00:15:03.100 "traddr": "10.0.0.1", 00:15:03.100 "trsvcid": "34732" 00:15:03.100 }, 00:15:03.100 "auth": { 00:15:03.100 "state": "completed", 00:15:03.100 "digest": "sha384", 00:15:03.100 "dhgroup": "ffdhe4096" 00:15:03.100 } 00:15:03.100 } 00:15:03.100 ]' 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.101 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:03.357 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.287 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:04.544 20:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.544 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.105 00:15:05.105 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.105 20:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.105 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.361 { 00:15:05.361 "cntlid": 79, 00:15:05.361 "qid": 0, 00:15:05.361 "state": "enabled", 00:15:05.361 "thread": "nvmf_tgt_poll_group_000", 00:15:05.361 "listen_address": { 00:15:05.361 "trtype": "TCP", 00:15:05.361 "adrfam": "IPv4", 00:15:05.361 "traddr": "10.0.0.2", 00:15:05.361 "trsvcid": "4420" 00:15:05.361 }, 00:15:05.361 "peer_address": { 00:15:05.361 "trtype": "TCP", 00:15:05.361 "adrfam": "IPv4", 00:15:05.361 "traddr": "10.0.0.1", 00:15:05.361 "trsvcid": "57970" 00:15:05.361 }, 00:15:05.361 "auth": { 00:15:05.361 "state": "completed", 00:15:05.361 "digest": "sha384", 00:15:05.361 "dhgroup": "ffdhe4096" 00:15:05.361 } 00:15:05.361 } 00:15:05.361 ]' 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.361 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.617 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.550 20:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:06.550 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.807 20:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.807 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.372 00:15:07.372 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.372 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.372 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.629 { 00:15:07.629 "cntlid": 81, 00:15:07.629 "qid": 0, 00:15:07.629 "state": "enabled", 00:15:07.629 "thread": 
"nvmf_tgt_poll_group_000", 00:15:07.629 "listen_address": { 00:15:07.629 "trtype": "TCP", 00:15:07.629 "adrfam": "IPv4", 00:15:07.629 "traddr": "10.0.0.2", 00:15:07.629 "trsvcid": "4420" 00:15:07.629 }, 00:15:07.629 "peer_address": { 00:15:07.629 "trtype": "TCP", 00:15:07.629 "adrfam": "IPv4", 00:15:07.629 "traddr": "10.0.0.1", 00:15:07.629 "trsvcid": "57998" 00:15:07.629 }, 00:15:07.629 "auth": { 00:15:07.629 "state": "completed", 00:15:07.629 "digest": "sha384", 00:15:07.629 "dhgroup": "ffdhe6144" 00:15:07.629 } 00:15:07.629 } 00:15:07.629 ]' 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.629 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.887 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.887 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.887 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.144 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:09.138 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.138 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.138 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.138 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.138 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.139 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.139 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:09.139 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:09.414 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:09.414 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.414 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.414 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:15:09.414 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.415 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.977 00:15:09.977 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.977 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.977 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.233 { 00:15:10.233 "cntlid": 83, 00:15:10.233 "qid": 0, 00:15:10.233 "state": "enabled", 00:15:10.233 "thread": "nvmf_tgt_poll_group_000", 00:15:10.233 "listen_address": { 00:15:10.233 "trtype": "TCP", 00:15:10.233 "adrfam": "IPv4", 00:15:10.233 "traddr": "10.0.0.2", 00:15:10.233 "trsvcid": "4420" 00:15:10.233 }, 00:15:10.233 "peer_address": { 00:15:10.233 "trtype": "TCP", 00:15:10.233 "adrfam": "IPv4", 00:15:10.233 "traddr": "10.0.0.1", 00:15:10.233 "trsvcid": "58032" 00:15:10.233 }, 00:15:10.233 "auth": { 00:15:10.233 "state": "completed", 00:15:10.233 "digest": "sha384", 00:15:10.233 "dhgroup": "ffdhe6144" 00:15:10.233 } 00:15:10.233 } 00:15:10.233 ]' 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.233 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.234 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.234 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.234 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.234 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.234 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.490 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.861 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.862 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.862 20:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.425 00:15:12.425 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.425 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.425 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.683 { 00:15:12.683 "cntlid": 85, 00:15:12.683 "qid": 0, 00:15:12.683 "state": "enabled", 00:15:12.683 "thread": "nvmf_tgt_poll_group_000", 00:15:12.683 "listen_address": { 00:15:12.683 "trtype": "TCP", 00:15:12.683 "adrfam": "IPv4", 00:15:12.683 "traddr": "10.0.0.2", 00:15:12.683 "trsvcid": "4420" 00:15:12.683 }, 00:15:12.683 "peer_address": { 00:15:12.683 "trtype": "TCP", 00:15:12.683 "adrfam": "IPv4", 00:15:12.683 "traddr": "10.0.0.1", 
00:15:12.683 "trsvcid": "58060" 00:15:12.683 }, 00:15:12.683 "auth": { 00:15:12.683 "state": "completed", 00:15:12.683 "digest": "sha384", 00:15:12.683 "dhgroup": "ffdhe6144" 00:15:12.683 } 00:15:12.683 } 00:15:12.683 ]' 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.683 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.248 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.180 20:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.180 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.745 00:15:14.745 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.745 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.745 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.003 20:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.003 { 00:15:15.003 "cntlid": 87, 00:15:15.003 "qid": 0, 00:15:15.003 "state": "enabled", 00:15:15.003 "thread": "nvmf_tgt_poll_group_000", 00:15:15.003 "listen_address": { 00:15:15.003 "trtype": "TCP", 00:15:15.003 "adrfam": "IPv4", 00:15:15.003 "traddr": "10.0.0.2", 00:15:15.003 "trsvcid": "4420" 00:15:15.003 }, 00:15:15.003 "peer_address": { 00:15:15.003 "trtype": "TCP", 00:15:15.003 "adrfam": "IPv4", 00:15:15.003 "traddr": "10.0.0.1", 00:15:15.003 "trsvcid": "44356" 00:15:15.003 }, 00:15:15.003 "auth": { 00:15:15.003 "state": "completed", 00:15:15.003 "digest": "sha384", 00:15:15.003 "dhgroup": "ffdhe6144" 00:15:15.003 } 00:15:15.003 } 00:15:15.003 ]' 00:15:15.003 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.262 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.520 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:16.453 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:16.711 20:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.711 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:17.644 00:15:17.644 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.644 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.644 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.902 { 00:15:17.902 "cntlid": 89, 00:15:17.902 "qid": 0, 00:15:17.902 "state": "enabled", 00:15:17.902 "thread": "nvmf_tgt_poll_group_000", 00:15:17.902 "listen_address": { 00:15:17.902 "trtype": "TCP", 00:15:17.902 "adrfam": "IPv4", 00:15:17.902 "traddr": "10.0.0.2", 00:15:17.902 "trsvcid": "4420" 00:15:17.902 }, 00:15:17.902 "peer_address": { 00:15:17.902 "trtype": "TCP", 00:15:17.902 "adrfam": "IPv4", 00:15:17.902 "traddr": "10.0.0.1", 00:15:17.902 "trsvcid": "44378" 00:15:17.902 }, 00:15:17.902 "auth": { 00:15:17.902 "state": "completed", 00:15:17.902 "digest": "sha384", 00:15:17.902 "dhgroup": "ffdhe8192" 00:15:17.902 } 00:15:17.902 } 00:15:17.902 ]' 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.902 
20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.902 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.160 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:19.092 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.350 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.283 00:15:20.283 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.283 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.283 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:20.541 { 00:15:20.541 "cntlid": 91, 00:15:20.541 "qid": 0, 00:15:20.541 "state": "enabled", 00:15:20.541 "thread": "nvmf_tgt_poll_group_000", 00:15:20.541 "listen_address": { 00:15:20.541 "trtype": "TCP", 00:15:20.541 "adrfam": "IPv4", 00:15:20.541 "traddr": "10.0.0.2", 00:15:20.541 "trsvcid": "4420" 00:15:20.541 }, 00:15:20.541 "peer_address": { 00:15:20.541 "trtype": "TCP", 00:15:20.541 "adrfam": "IPv4", 00:15:20.541 "traddr": "10.0.0.1", 00:15:20.541 "trsvcid": "44400" 00:15:20.541 }, 00:15:20.541 "auth": { 00:15:20.541 "state": "completed", 00:15:20.541 "digest": "sha384", 00:15:20.541 "dhgroup": "ffdhe8192" 00:15:20.541 } 00:15:20.541 } 00:15:20.541 ]' 00:15:20.541 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.541 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.541 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.541 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.541 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.799 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.799 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.799 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.058 20:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:21.991 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.249 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.182 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.182 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.440 { 00:15:23.440 "cntlid": 93, 00:15:23.440 "qid": 0, 00:15:23.440 "state": "enabled", 00:15:23.440 "thread": "nvmf_tgt_poll_group_000", 00:15:23.440 "listen_address": { 00:15:23.440 "trtype": "TCP", 00:15:23.440 "adrfam": "IPv4", 00:15:23.440 "traddr": "10.0.0.2", 00:15:23.440 "trsvcid": "4420" 00:15:23.440 }, 00:15:23.440 "peer_address": { 00:15:23.440 "trtype": "TCP", 00:15:23.440 "adrfam": "IPv4", 00:15:23.440 "traddr": "10.0.0.1", 00:15:23.440 "trsvcid": "44412" 00:15:23.440 }, 00:15:23.440 "auth": { 00:15:23.440 "state": "completed", 00:15:23.440 "digest": "sha384", 00:15:23.440 "dhgroup": "ffdhe8192" 00:15:23.440 } 00:15:23.440 } 00:15:23.440 ]' 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.440 20:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.698 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.680 20:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:24.680 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:24.937 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:24.937 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.937 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.937 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:24.937 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:24.938 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.871 00:15:25.871 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.871 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.871 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.129 { 00:15:26.129 "cntlid": 95, 00:15:26.129 "qid": 0, 00:15:26.129 "state": "enabled", 00:15:26.129 "thread": "nvmf_tgt_poll_group_000", 00:15:26.129 "listen_address": { 00:15:26.129 "trtype": "TCP", 00:15:26.129 "adrfam": "IPv4", 00:15:26.129 "traddr": "10.0.0.2", 00:15:26.129 "trsvcid": "4420" 00:15:26.129 }, 00:15:26.129 "peer_address": { 00:15:26.129 "trtype": "TCP", 00:15:26.129 "adrfam": "IPv4", 00:15:26.129 "traddr": "10.0.0.1", 
00:15:26.129 "trsvcid": "35538" 00:15:26.129 }, 00:15:26.129 "auth": { 00:15:26.129 "state": "completed", 00:15:26.129 "digest": "sha384", 00:15:26.129 "dhgroup": "ffdhe8192" 00:15:26.129 } 00:15:26.129 } 00:15:26.129 ]' 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.129 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.130 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.130 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.130 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.388 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.322 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.580 20:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.580 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.154 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.154 { 00:15:28.154 "cntlid": 97, 00:15:28.154 "qid": 0, 00:15:28.154 "state": "enabled", 00:15:28.154 "thread": "nvmf_tgt_poll_group_000", 00:15:28.154 "listen_address": { 00:15:28.154 "trtype": "TCP", 00:15:28.154 "adrfam": "IPv4", 00:15:28.154 "traddr": "10.0.0.2", 00:15:28.154 "trsvcid": "4420" 00:15:28.154 }, 00:15:28.154 "peer_address": { 00:15:28.154 "trtype": "TCP", 00:15:28.154 "adrfam": "IPv4", 00:15:28.154 "traddr": "10.0.0.1", 00:15:28.154 "trsvcid": "35570" 00:15:28.154 }, 00:15:28.154 "auth": { 00:15:28.154 "state": "completed", 00:15:28.154 "digest": "sha512", 00:15:28.154 "dhgroup": "null" 00:15:28.154 } 00:15:28.154 } 00:15:28.154 ]' 00:15:28.154 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:28.413 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.671 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.603 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.860 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.118 00:15:30.118 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.118 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.118 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.376 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.376 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.376 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.376 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.646 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.646 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.646 { 00:15:30.646 "cntlid": 99, 00:15:30.646 "qid": 0, 00:15:30.646 "state": "enabled", 00:15:30.646 "thread": "nvmf_tgt_poll_group_000", 00:15:30.646 "listen_address": { 00:15:30.646 "trtype": "TCP", 00:15:30.646 "adrfam": "IPv4", 00:15:30.646 "traddr": "10.0.0.2", 00:15:30.646 "trsvcid": "4420" 00:15:30.646 }, 00:15:30.646 "peer_address": { 00:15:30.646 "trtype": "TCP", 00:15:30.646 "adrfam": "IPv4", 00:15:30.646 "traddr": "10.0.0.1", 00:15:30.646 "trsvcid": "35608" 00:15:30.646 }, 00:15:30.646 "auth": { 00:15:30.646 "state": "completed", 00:15:30.646 "digest": "sha512", 00:15:30.646 "dhgroup": "null" 00:15:30.646 } 00:15:30.646 } 00:15:30.646 ]' 00:15:30.646 
20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.646 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.646 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.646 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.646 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.646 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.646 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.646 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.910 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.841 20:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:31.841 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 20:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.100 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.357 00:15:32.357 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.357 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.357 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.615 { 00:15:32.615 "cntlid": 101, 00:15:32.615 "qid": 0, 00:15:32.615 "state": "enabled", 00:15:32.615 "thread": "nvmf_tgt_poll_group_000", 00:15:32.615 "listen_address": { 00:15:32.615 "trtype": "TCP", 00:15:32.615 "adrfam": "IPv4", 00:15:32.615 "traddr": "10.0.0.2", 00:15:32.615 "trsvcid": "4420" 00:15:32.615 }, 00:15:32.615 "peer_address": { 00:15:32.615 "trtype": "TCP", 00:15:32.615 "adrfam": "IPv4", 00:15:32.615 "traddr": "10.0.0.1", 00:15:32.615 "trsvcid": "35638" 00:15:32.615 }, 00:15:32.615 "auth": { 00:15:32.615 "state": "completed", 00:15:32.615 "digest": "sha512", 00:15:32.615 "dhgroup": "null" 00:15:32.615 } 00:15:32.615 } 00:15:32.615 ]' 00:15:32.615 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.872 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.873 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.130 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.064 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.322 20:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.322 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.323 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.323 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.323 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.323 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.323 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.580 00:15:34.580 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.580 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.580 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.838 { 00:15:34.838 "cntlid": 103, 00:15:34.838 "qid": 0, 00:15:34.838 "state": "enabled", 00:15:34.838 "thread": "nvmf_tgt_poll_group_000", 00:15:34.838 "listen_address": { 00:15:34.838 "trtype": "TCP", 00:15:34.838 "adrfam": "IPv4", 00:15:34.838 "traddr": "10.0.0.2", 00:15:34.838 "trsvcid": "4420" 00:15:34.838 }, 00:15:34.838 "peer_address": { 00:15:34.838 "trtype": "TCP", 00:15:34.838 "adrfam": "IPv4", 00:15:34.838 "traddr": "10.0.0.1", 00:15:34.838 "trsvcid": "57912" 00:15:34.838 }, 00:15:34.838 "auth": { 00:15:34.838 "state": "completed", 00:15:34.838 "digest": "sha512", 00:15:34.838 "dhgroup": "null" 00:15:34.838 } 00:15:34.838 } 00:15:34.838 ]' 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.838 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.096 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:36.029 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.029 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.029 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.029 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.286 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.287 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.287 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.287 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.544 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.544 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:15:36.544 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.801 00:15:36.801 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.801 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.801 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.059 { 00:15:37.059 "cntlid": 105, 00:15:37.059 "qid": 0, 00:15:37.059 "state": "enabled", 00:15:37.059 "thread": "nvmf_tgt_poll_group_000", 00:15:37.059 "listen_address": { 00:15:37.059 "trtype": "TCP", 00:15:37.059 "adrfam": "IPv4", 00:15:37.059 "traddr": "10.0.0.2", 00:15:37.059 "trsvcid": "4420" 00:15:37.059 }, 00:15:37.059 "peer_address": { 00:15:37.059 "trtype": "TCP", 00:15:37.059 "adrfam": "IPv4", 
00:15:37.059 "traddr": "10.0.0.1", 00:15:37.059 "trsvcid": "57942" 00:15:37.059 }, 00:15:37.059 "auth": { 00:15:37.059 "state": "completed", 00:15:37.059 "digest": "sha512", 00:15:37.059 "dhgroup": "ffdhe2048" 00:15:37.059 } 00:15:37.059 } 00:15:37.059 ]' 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.059 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.316 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:38.249 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.507 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.507 20:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.765 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.023 00:15:39.023 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.023 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.023 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.280 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.280 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.280 20:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.280 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.280 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.280 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.281 { 00:15:39.281 "cntlid": 107, 00:15:39.281 "qid": 0, 00:15:39.281 "state": "enabled", 00:15:39.281 "thread": "nvmf_tgt_poll_group_000", 00:15:39.281 "listen_address": { 00:15:39.281 "trtype": "TCP", 00:15:39.281 "adrfam": "IPv4", 00:15:39.281 "traddr": "10.0.0.2", 00:15:39.281 "trsvcid": "4420" 00:15:39.281 }, 00:15:39.281 "peer_address": { 00:15:39.281 "trtype": "TCP", 00:15:39.281 "adrfam": "IPv4", 00:15:39.281 "traddr": "10.0.0.1", 00:15:39.281 "trsvcid": "57976" 00:15:39.281 }, 00:15:39.281 "auth": { 00:15:39.281 "state": "completed", 00:15:39.281 "digest": "sha512", 00:15:39.281 "dhgroup": "ffdhe2048" 00:15:39.281 } 00:15:39.281 } 00:15:39.281 ]' 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.281 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.281 20:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.567 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.500 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.758 20:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.758 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:41.323 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.323 { 00:15:41.323 "cntlid": 109, 00:15:41.323 "qid": 0, 00:15:41.323 "state": "enabled", 00:15:41.323 "thread": "nvmf_tgt_poll_group_000", 00:15:41.323 "listen_address": { 00:15:41.323 "trtype": "TCP", 00:15:41.323 "adrfam": "IPv4", 00:15:41.323 "traddr": "10.0.0.2", 00:15:41.323 "trsvcid": "4420" 00:15:41.323 }, 00:15:41.323 "peer_address": { 00:15:41.323 "trtype": "TCP", 00:15:41.323 "adrfam": "IPv4", 00:15:41.323 "traddr": "10.0.0.1", 00:15:41.323 "trsvcid": "58006" 00:15:41.323 }, 00:15:41.323 "auth": { 00:15:41.323 "state": "completed", 00:15:41.323 "digest": "sha512", 00:15:41.323 "dhgroup": "ffdhe2048" 00:15:41.323 } 00:15:41.323 } 00:15:41.323 ]' 00:15:41.323 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.579 
20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.579 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.579 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.579 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.579 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.579 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.579 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.837 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.768 20:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.768 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.025 20:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.025 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.026 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.283 00:15:43.283 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.283 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.283 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.540 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.540 { 00:15:43.540 "cntlid": 111, 00:15:43.540 "qid": 0, 
00:15:43.540 "state": "enabled", 00:15:43.540 "thread": "nvmf_tgt_poll_group_000", 00:15:43.540 "listen_address": { 00:15:43.540 "trtype": "TCP", 00:15:43.540 "adrfam": "IPv4", 00:15:43.540 "traddr": "10.0.0.2", 00:15:43.540 "trsvcid": "4420" 00:15:43.540 }, 00:15:43.540 "peer_address": { 00:15:43.540 "trtype": "TCP", 00:15:43.540 "adrfam": "IPv4", 00:15:43.540 "traddr": "10.0.0.1", 00:15:43.540 "trsvcid": "58052" 00:15:43.540 }, 00:15:43.540 "auth": { 00:15:43.540 "state": "completed", 00:15:43.540 "digest": "sha512", 00:15:43.541 "dhgroup": "ffdhe2048" 00:15:43.541 } 00:15:43.541 } 00:15:43.541 ]' 00:15:43.541 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.798 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.056 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.989 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.247 20:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.504 00:15:45.762 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.762 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.762 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.019 { 00:15:46.019 "cntlid": 113, 00:15:46.019 "qid": 0, 00:15:46.019 "state": "enabled", 00:15:46.019 "thread": "nvmf_tgt_poll_group_000", 00:15:46.019 "listen_address": { 00:15:46.019 "trtype": "TCP", 00:15:46.019 "adrfam": "IPv4", 00:15:46.019 "traddr": "10.0.0.2", 00:15:46.019 "trsvcid": "4420" 00:15:46.019 }, 00:15:46.019 "peer_address": { 00:15:46.019 "trtype": "TCP", 00:15:46.019 "adrfam": "IPv4", 00:15:46.019 "traddr": "10.0.0.1", 00:15:46.019 "trsvcid": "55286" 00:15:46.019 }, 00:15:46.019 "auth": { 00:15:46.019 "state": "completed", 00:15:46.019 "digest": "sha512", 00:15:46.019 "dhgroup": "ffdhe3072" 00:15:46.019 } 00:15:46.019 } 00:15:46.019 ]' 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.019 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.277 20:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.210 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.469 20:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.034 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.034 { 00:15:48.034 "cntlid": 115, 00:15:48.034 "qid": 0, 00:15:48.034 "state": "enabled", 00:15:48.034 "thread": "nvmf_tgt_poll_group_000", 00:15:48.034 "listen_address": { 00:15:48.034 "trtype": "TCP", 00:15:48.034 "adrfam": "IPv4", 00:15:48.034 "traddr": "10.0.0.2", 00:15:48.034 "trsvcid": "4420" 00:15:48.034 }, 00:15:48.034 "peer_address": { 
00:15:48.034 "trtype": "TCP", 00:15:48.034 "adrfam": "IPv4", 00:15:48.034 "traddr": "10.0.0.1", 00:15:48.034 "trsvcid": "55314" 00:15:48.034 }, 00:15:48.034 "auth": { 00:15:48.034 "state": "completed", 00:15:48.034 "digest": "sha512", 00:15:48.034 "dhgroup": "ffdhe3072" 00:15:48.034 } 00:15:48.034 } 00:15:48.034 ]' 00:15:48.034 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.291 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.549 20:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:15:49.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.481 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.739 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.305 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.305 20:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.305 { 00:15:50.305 "cntlid": 117, 00:15:50.305 "qid": 0, 00:15:50.305 "state": "enabled", 00:15:50.305 "thread": "nvmf_tgt_poll_group_000", 00:15:50.305 "listen_address": { 00:15:50.305 "trtype": "TCP", 00:15:50.305 "adrfam": "IPv4", 00:15:50.305 "traddr": "10.0.0.2", 00:15:50.305 "trsvcid": "4420" 00:15:50.305 }, 00:15:50.305 "peer_address": { 00:15:50.305 "trtype": "TCP", 00:15:50.305 "adrfam": "IPv4", 00:15:50.305 "traddr": "10.0.0.1", 00:15:50.305 "trsvcid": "55356" 00:15:50.305 }, 00:15:50.305 "auth": { 00:15:50.305 "state": "completed", 00:15:50.305 "digest": "sha512", 00:15:50.305 "dhgroup": "ffdhe3072" 00:15:50.305 } 00:15:50.305 } 00:15:50.305 ]' 00:15:50.305 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.562 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.562 20:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.818 20:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.753 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:52.011 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.011 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.269 00:15:52.527 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.527 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.527 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.784 { 00:15:52.784 "cntlid": 119, 00:15:52.784 "qid": 0, 00:15:52.784 "state": "enabled", 00:15:52.784 "thread": "nvmf_tgt_poll_group_000", 00:15:52.784 "listen_address": { 00:15:52.784 "trtype": "TCP", 00:15:52.784 "adrfam": "IPv4", 00:15:52.784 "traddr": "10.0.0.2", 00:15:52.784 "trsvcid": "4420" 00:15:52.784 }, 00:15:52.784 "peer_address": { 00:15:52.784 "trtype": "TCP", 00:15:52.784 "adrfam": "IPv4", 00:15:52.784 "traddr": "10.0.0.1", 00:15:52.784 "trsvcid": "55382" 00:15:52.784 }, 00:15:52.784 "auth": { 00:15:52.784 "state": "completed", 00:15:52.784 "digest": "sha512", 00:15:52.784 "dhgroup": "ffdhe3072" 00:15:52.784 } 00:15:52.784 } 00:15:52.784 ]' 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.784 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.043 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.007 20:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.007 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.008 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.265 20:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.265 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.831 00:15:54.831 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.831 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.831 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.099 { 
00:15:55.099 "cntlid": 121, 00:15:55.099 "qid": 0, 00:15:55.099 "state": "enabled", 00:15:55.099 "thread": "nvmf_tgt_poll_group_000", 00:15:55.099 "listen_address": { 00:15:55.099 "trtype": "TCP", 00:15:55.099 "adrfam": "IPv4", 00:15:55.099 "traddr": "10.0.0.2", 00:15:55.099 "trsvcid": "4420" 00:15:55.099 }, 00:15:55.099 "peer_address": { 00:15:55.099 "trtype": "TCP", 00:15:55.099 "adrfam": "IPv4", 00:15:55.099 "traddr": "10.0.0.1", 00:15:55.099 "trsvcid": "46306" 00:15:55.099 }, 00:15:55.099 "auth": { 00:15:55.099 "state": "completed", 00:15:55.099 "digest": "sha512", 00:15:55.099 "dhgroup": "ffdhe4096" 00:15:55.099 } 00:15:55.099 } 00:15:55.099 ]' 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.099 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.360 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:15:56.294 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.295 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.553 20:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.553 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.119 00:15:57.119 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.119 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.119 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.376 { 00:15:57.376 "cntlid": 123, 00:15:57.376 "qid": 0, 00:15:57.376 "state": "enabled", 00:15:57.376 "thread": "nvmf_tgt_poll_group_000", 00:15:57.376 "listen_address": { 00:15:57.376 "trtype": "TCP", 00:15:57.376 "adrfam": "IPv4", 00:15:57.376 "traddr": "10.0.0.2", 00:15:57.376 "trsvcid": "4420" 00:15:57.376 }, 00:15:57.376 "peer_address": { 00:15:57.376 "trtype": "TCP", 00:15:57.376 "adrfam": "IPv4", 00:15:57.376 "traddr": "10.0.0.1", 00:15:57.376 "trsvcid": "46340" 00:15:57.376 }, 00:15:57.376 "auth": { 00:15:57.376 "state": "completed", 00:15:57.376 "digest": "sha512", 00:15:57.376 "dhgroup": "ffdhe4096" 00:15:57.376 } 00:15:57.376 } 00:15:57.376 ]' 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.376 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.634 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.568 20:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.568 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.825 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:58.825 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.826 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.391 00:15:59.391 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.391 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.391 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.649 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.649 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.649 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.649 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.649 { 00:15:59.649 "cntlid": 125, 00:15:59.649 "qid": 0, 00:15:59.649 "state": "enabled", 00:15:59.649 "thread": "nvmf_tgt_poll_group_000", 00:15:59.649 "listen_address": { 00:15:59.649 "trtype": "TCP", 00:15:59.649 "adrfam": "IPv4", 00:15:59.649 "traddr": "10.0.0.2", 00:15:59.649 "trsvcid": "4420" 00:15:59.649 }, 00:15:59.649 "peer_address": { 
00:15:59.649 "trtype": "TCP", 00:15:59.649 "adrfam": "IPv4", 00:15:59.649 "traddr": "10.0.0.1", 00:15:59.649 "trsvcid": "46360" 00:15:59.649 }, 00:15:59.649 "auth": { 00:15:59.649 "state": "completed", 00:15:59.649 "digest": "sha512", 00:15:59.649 "dhgroup": "ffdhe4096" 00:15:59.649 } 00:15:59.649 } 00:15:59.649 ]' 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.649 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.907 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:16:00.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.840 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.098 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.663 00:16:01.663 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.663 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.663 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.921 { 00:16:01.921 "cntlid": 127, 00:16:01.921 "qid": 0, 00:16:01.921 "state": "enabled", 00:16:01.921 "thread": "nvmf_tgt_poll_group_000", 00:16:01.921 "listen_address": { 00:16:01.921 "trtype": "TCP", 00:16:01.921 "adrfam": "IPv4", 00:16:01.921 "traddr": "10.0.0.2", 00:16:01.921 "trsvcid": "4420" 00:16:01.921 }, 00:16:01.921 "peer_address": { 00:16:01.921 "trtype": "TCP", 00:16:01.921 "adrfam": "IPv4", 00:16:01.921 "traddr": "10.0.0.1", 00:16:01.921 "trsvcid": "46380" 00:16:01.921 }, 00:16:01.921 "auth": { 00:16:01.921 "state": "completed", 00:16:01.921 "digest": "sha512", 00:16:01.921 "dhgroup": "ffdhe4096" 00:16:01.921 } 00:16:01.921 } 00:16:01.921 ]' 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.921 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.179 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.111 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.368 20:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.368 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:03.933 00:16:03.933 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.933 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.933 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.190 { 00:16:04.190 "cntlid": 129, 00:16:04.190 "qid": 0, 00:16:04.190 "state": "enabled", 00:16:04.190 "thread": "nvmf_tgt_poll_group_000", 00:16:04.190 "listen_address": { 00:16:04.190 "trtype": "TCP", 00:16:04.190 "adrfam": "IPv4", 00:16:04.190 "traddr": "10.0.0.2", 00:16:04.190 "trsvcid": "4420" 00:16:04.190 }, 00:16:04.190 "peer_address": { 00:16:04.190 "trtype": "TCP", 00:16:04.190 "adrfam": "IPv4", 00:16:04.190 "traddr": "10.0.0.1", 00:16:04.190 "trsvcid": "46392" 00:16:04.190 }, 00:16:04.190 "auth": { 00:16:04.190 "state": "completed", 00:16:04.190 "digest": "sha512", 00:16:04.190 "dhgroup": "ffdhe6144" 00:16:04.190 } 00:16:04.190 } 00:16:04.190 ]' 00:16:04.190 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.447 
20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.447 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.705 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.636 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.893 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.457 00:16:06.457 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.457 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.457 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.715 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:06.715 { 00:16:06.715 "cntlid": 131, 00:16:06.715 "qid": 0, 00:16:06.715 "state": "enabled", 00:16:06.715 "thread": "nvmf_tgt_poll_group_000", 00:16:06.715 "listen_address": { 00:16:06.715 "trtype": "TCP", 00:16:06.715 "adrfam": "IPv4", 00:16:06.715 "traddr": "10.0.0.2", 00:16:06.716 "trsvcid": "4420" 00:16:06.716 }, 00:16:06.716 "peer_address": { 00:16:06.716 "trtype": "TCP", 00:16:06.716 "adrfam": "IPv4", 00:16:06.716 "traddr": "10.0.0.1", 00:16:06.716 "trsvcid": "49796" 00:16:06.716 }, 00:16:06.716 "auth": { 00:16:06.716 "state": "completed", 00:16:06.716 "digest": "sha512", 00:16:06.716 "dhgroup": "ffdhe6144" 00:16:06.716 } 00:16:06.716 } 00:16:06.716 ]' 00:16:06.716 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.716 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.716 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.716 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.716 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.974 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.974 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.974 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.231 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.165 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.423 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.994 00:16:08.994 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.994 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.994 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.271 { 00:16:09.271 "cntlid": 133, 00:16:09.271 "qid": 0, 00:16:09.271 "state": "enabled", 00:16:09.271 "thread": "nvmf_tgt_poll_group_000", 00:16:09.271 "listen_address": { 00:16:09.271 "trtype": "TCP", 00:16:09.271 "adrfam": "IPv4", 00:16:09.271 "traddr": "10.0.0.2", 00:16:09.271 "trsvcid": "4420" 00:16:09.271 }, 00:16:09.271 "peer_address": { 00:16:09.271 "trtype": "TCP", 00:16:09.271 "adrfam": "IPv4", 00:16:09.271 "traddr": "10.0.0.1", 00:16:09.271 "trsvcid": "49826" 00:16:09.271 }, 00:16:09.271 "auth": { 00:16:09.271 "state": "completed", 00:16:09.271 "digest": "sha512", 00:16:09.271 "dhgroup": "ffdhe6144" 00:16:09.271 } 00:16:09.271 } 00:16:09.271 ]' 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.271 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.529 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:16:09.529 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.529 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.529 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.529 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.787 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.719 20:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:10.719 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:16:10.977 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.543 00:16:11.543 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.543 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.543 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.801 { 00:16:11.801 "cntlid": 135, 00:16:11.801 "qid": 0, 00:16:11.801 "state": "enabled", 00:16:11.801 "thread": "nvmf_tgt_poll_group_000", 00:16:11.801 "listen_address": { 00:16:11.801 "trtype": "TCP", 00:16:11.801 "adrfam": "IPv4", 00:16:11.801 "traddr": "10.0.0.2", 00:16:11.801 "trsvcid": "4420" 00:16:11.801 }, 00:16:11.801 "peer_address": { 00:16:11.801 "trtype": "TCP", 00:16:11.801 "adrfam": "IPv4", 00:16:11.801 "traddr": "10.0.0.1", 
00:16:11.801 "trsvcid": "49844" 00:16:11.801 }, 00:16:11.801 "auth": { 00:16:11.801 "state": "completed", 00:16:11.801 "digest": "sha512", 00:16:11.801 "dhgroup": "ffdhe6144" 00:16:11.801 } 00:16:11.801 } 00:16:11.801 ]' 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.801 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.059 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.991 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:12.992 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.556 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.489 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.489 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.489 { 00:16:14.489 "cntlid": 137, 00:16:14.489 "qid": 0, 00:16:14.489 "state": "enabled", 00:16:14.489 "thread": "nvmf_tgt_poll_group_000", 00:16:14.489 "listen_address": { 00:16:14.489 "trtype": "TCP", 00:16:14.490 "adrfam": "IPv4", 00:16:14.490 "traddr": "10.0.0.2", 00:16:14.490 "trsvcid": "4420" 00:16:14.490 }, 00:16:14.490 "peer_address": { 00:16:14.490 "trtype": "TCP", 00:16:14.490 "adrfam": "IPv4", 00:16:14.490 "traddr": "10.0.0.1", 00:16:14.490 "trsvcid": "49874" 00:16:14.490 }, 00:16:14.490 "auth": { 00:16:14.490 "state": "completed", 00:16:14.490 "digest": "sha512", 00:16:14.490 "dhgroup": "ffdhe8192" 00:16:14.490 } 00:16:14.490 } 00:16:14.490 ]' 00:16:14.490 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.490 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.490 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.490 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.490 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.747 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.747 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.747 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.004 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.937 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.195 20:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.195 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:17.126 00:16:17.126 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.126 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.126 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.383 { 00:16:17.383 "cntlid": 139, 00:16:17.383 "qid": 0, 00:16:17.383 "state": "enabled", 00:16:17.383 "thread": "nvmf_tgt_poll_group_000", 00:16:17.383 "listen_address": { 00:16:17.383 "trtype": "TCP", 00:16:17.383 "adrfam": "IPv4", 00:16:17.383 "traddr": "10.0.0.2", 00:16:17.383 "trsvcid": "4420" 00:16:17.383 }, 00:16:17.383 "peer_address": { 00:16:17.383 "trtype": "TCP", 00:16:17.383 "adrfam": "IPv4", 00:16:17.383 "traddr": "10.0.0.1", 00:16:17.383 "trsvcid": "48618" 00:16:17.383 }, 00:16:17.383 "auth": { 00:16:17.383 "state": "completed", 00:16:17.383 "digest": "sha512", 00:16:17.383 "dhgroup": "ffdhe8192" 00:16:17.383 } 00:16:17.383 } 00:16:17.383 ]' 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.383 
20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.383 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.640 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWE1OTY3MTNjY2MxNDgxZDA4NTY4YWFkNWMzOWIzZWRCJBqx: --dhchap-ctrl-secret DHHC-1:02:MDQzZDRhNTg4ZmQ1NTM0MjFkNzg0NDgzYmQzOWRhODZlNjYxNWY2Yzk3NDcwNGRjdCgGAQ==: 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.572 20:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.572 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.828 20:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.828 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.759 00:16:19.759 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.759 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.760 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.017 { 
00:16:20.017 "cntlid": 141, 00:16:20.017 "qid": 0, 00:16:20.017 "state": "enabled", 00:16:20.017 "thread": "nvmf_tgt_poll_group_000", 00:16:20.017 "listen_address": { 00:16:20.017 "trtype": "TCP", 00:16:20.017 "adrfam": "IPv4", 00:16:20.017 "traddr": "10.0.0.2", 00:16:20.017 "trsvcid": "4420" 00:16:20.017 }, 00:16:20.017 "peer_address": { 00:16:20.017 "trtype": "TCP", 00:16:20.017 "adrfam": "IPv4", 00:16:20.017 "traddr": "10.0.0.1", 00:16:20.017 "trsvcid": "48642" 00:16:20.017 }, 00:16:20.017 "auth": { 00:16:20.017 "state": "completed", 00:16:20.017 "digest": "sha512", 00:16:20.017 "dhgroup": "ffdhe8192" 00:16:20.017 } 00:16:20.017 } 00:16:20.017 ]' 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.017 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.275 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.275 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.275 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.532 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2VkN2NjZTAyNWZlNjdmNDlmZTQwZDQ2ZjJlMGU0YmYyOTczYzFiZDY2NjQwODFmw/YDtg==: --dhchap-ctrl-secret DHHC-1:01:MzkzMjFmNmUwMDYyMjFlM2NkNTEyMWRhZGUzYjVhZmVfRYF0: 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.465 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.722 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.723 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.723 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.656 00:16:22.656 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.656 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.656 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.656 20:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.656 { 00:16:22.656 "cntlid": 143, 00:16:22.656 "qid": 0, 00:16:22.656 "state": "enabled", 00:16:22.656 "thread": "nvmf_tgt_poll_group_000", 00:16:22.656 "listen_address": { 00:16:22.656 "trtype": "TCP", 00:16:22.656 "adrfam": "IPv4", 00:16:22.656 "traddr": "10.0.0.2", 00:16:22.656 "trsvcid": "4420" 00:16:22.656 }, 00:16:22.656 "peer_address": { 00:16:22.656 "trtype": "TCP", 00:16:22.656 "adrfam": "IPv4", 00:16:22.656 "traddr": "10.0.0.1", 00:16:22.656 "trsvcid": "48668" 00:16:22.656 }, 00:16:22.656 "auth": { 00:16:22.656 "state": "completed", 00:16:22.656 "digest": "sha512", 00:16:22.656 "dhgroup": "ffdhe8192" 00:16:22.656 } 00:16:22.656 } 00:16:22.656 ]' 00:16:22.656 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.914 20:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.914 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.172 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:24.104 20:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:24.104 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.363 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.328 00:16:25.328 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.328 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.328 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.586 { 00:16:25.586 "cntlid": 145, 00:16:25.586 "qid": 0, 00:16:25.586 "state": "enabled", 
00:16:25.586 "thread": "nvmf_tgt_poll_group_000", 00:16:25.586 "listen_address": { 00:16:25.586 "trtype": "TCP", 00:16:25.586 "adrfam": "IPv4", 00:16:25.586 "traddr": "10.0.0.2", 00:16:25.586 "trsvcid": "4420" 00:16:25.586 }, 00:16:25.586 "peer_address": { 00:16:25.586 "trtype": "TCP", 00:16:25.586 "adrfam": "IPv4", 00:16:25.586 "traddr": "10.0.0.1", 00:16:25.586 "trsvcid": "43950" 00:16:25.586 }, 00:16:25.586 "auth": { 00:16:25.586 "state": "completed", 00:16:25.586 "digest": "sha512", 00:16:25.586 "dhgroup": "ffdhe8192" 00:16:25.586 } 00:16:25.586 } 00:16:25.586 ]' 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.586 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.586 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.586 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.586 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.586 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.586 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.843 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YWJmYjc0MjJjNGEwNDg0NjdlY2QwNzdhZjc1ZDAxNzk3ZGUwZDlkYTdlNzZlMDczeWBEZQ==: --dhchap-ctrl-secret DHHC-1:03:Nzk1YjZlZmVhMTk0ZmYzYzI2YmQzZWIwOTliOWFiNzY0NTA5YjA0ZDY2M2NmNzRmMGQwZTE3MDY3M2UwYTFjZTluWkA=: 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:26.775 
20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:26.775 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.708 request: 00:16:27.708 { 00:16:27.708 "name": "nvme0", 00:16:27.708 "trtype": "tcp", 00:16:27.708 "traddr": "10.0.0.2", 00:16:27.708 "adrfam": "ipv4", 00:16:27.708 "trsvcid": "4420", 00:16:27.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:27.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:27.709 "prchk_reftag": false, 00:16:27.709 "prchk_guard": false, 00:16:27.709 "hdgst": false, 00:16:27.709 "ddgst": false, 00:16:27.709 "dhchap_key": "key2", 
00:16:27.709 "method": "bdev_nvme_attach_controller", 00:16:27.709 "req_id": 1 00:16:27.709 } 00:16:27.709 Got JSON-RPC error response 00:16:27.709 response: 00:16:27.709 { 00:16:27.709 "code": -5, 00:16:27.709 "message": "Input/output error" 00:16:27.709 } 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:27.709 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.642 request: 00:16:28.642 { 00:16:28.642 "name": "nvme0", 00:16:28.642 
"trtype": "tcp", 00:16:28.642 "traddr": "10.0.0.2", 00:16:28.642 "adrfam": "ipv4", 00:16:28.642 "trsvcid": "4420", 00:16:28.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.642 "prchk_reftag": false, 00:16:28.642 "prchk_guard": false, 00:16:28.642 "hdgst": false, 00:16:28.642 "ddgst": false, 00:16:28.642 "dhchap_key": "key1", 00:16:28.642 "dhchap_ctrlr_key": "ckey2", 00:16:28.642 "method": "bdev_nvme_attach_controller", 00:16:28.642 "req_id": 1 00:16:28.642 } 00:16:28.642 Got JSON-RPC error response 00:16:28.642 response: 00:16:28.642 { 00:16:28.642 "code": -5, 00:16:28.642 "message": "Input/output error" 00:16:28.642 } 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:28.642 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.643 20:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.575 request: 00:16:29.575 { 00:16:29.575 "name": "nvme0", 00:16:29.575 "trtype": "tcp", 00:16:29.575 "traddr": "10.0.0.2", 00:16:29.575 "adrfam": "ipv4", 00:16:29.575 "trsvcid": "4420", 00:16:29.575 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:29.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:29.575 "prchk_reftag": false, 00:16:29.575 "prchk_guard": false, 00:16:29.575 "hdgst": false, 00:16:29.575 "ddgst": false, 00:16:29.575 "dhchap_key": "key1", 00:16:29.575 "dhchap_ctrlr_key": "ckey1", 00:16:29.575 "method": "bdev_nvme_attach_controller", 00:16:29.575 "req_id": 1 00:16:29.575 } 00:16:29.575 Got JSON-RPC error response 00:16:29.575 response: 00:16:29.575 { 00:16:29.575 "code": -5, 00:16:29.575 "message": "Input/output error" 00:16:29.575 } 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1579110 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1579110 ']' 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1579110 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579110 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579110' 00:16:29.575 killing process with pid 1579110 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1579110 00:16:29.575 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1579110 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1601775 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1601775 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1601775 ']' 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.833 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1601775 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1601775 ']' 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.091 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.349 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.349 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:30.349 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:30.349 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.349 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:30.606 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.606 
20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.606 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.606 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.538 00:16:31.538 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.538 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.538 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.795 20:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.795 { 00:16:31.795 "cntlid": 1, 00:16:31.795 "qid": 0, 00:16:31.795 "state": "enabled", 00:16:31.795 "thread": "nvmf_tgt_poll_group_000", 00:16:31.795 "listen_address": { 00:16:31.795 "trtype": "TCP", 00:16:31.795 "adrfam": "IPv4", 00:16:31.795 "traddr": "10.0.0.2", 00:16:31.795 "trsvcid": "4420" 00:16:31.795 }, 00:16:31.795 "peer_address": { 00:16:31.795 "trtype": "TCP", 00:16:31.795 "adrfam": "IPv4", 00:16:31.795 "traddr": "10.0.0.1", 00:16:31.795 "trsvcid": "43994" 00:16:31.795 }, 00:16:31.795 "auth": { 00:16:31.795 "state": "completed", 00:16:31.795 "digest": "sha512", 00:16:31.795 "dhgroup": "ffdhe8192" 00:16:31.795 } 00:16:31.795 } 00:16:31.795 ]' 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.795 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.052 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:M2ZjNTI4NTVhYzAxZTE2Mjg2NDMxNzg3ZTAzMDhiYjNlYTA5MTUxNzAwNTlmYWU2OTljZGQyYTVhYWZjYTAwMHcSyLs=: 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:32.984 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:33.241 20:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.241 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.498 request: 00:16:33.498 { 00:16:33.498 "name": "nvme0", 00:16:33.498 "trtype": "tcp", 00:16:33.498 
"traddr": "10.0.0.2", 00:16:33.498 "adrfam": "ipv4", 00:16:33.498 "trsvcid": "4420", 00:16:33.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:33.498 "prchk_reftag": false, 00:16:33.498 "prchk_guard": false, 00:16:33.498 "hdgst": false, 00:16:33.498 "ddgst": false, 00:16:33.498 "dhchap_key": "key3", 00:16:33.498 "method": "bdev_nvme_attach_controller", 00:16:33.498 "req_id": 1 00:16:33.498 } 00:16:33.498 Got JSON-RPC error response 00:16:33.498 response: 00:16:33.498 { 00:16:33.498 "code": -5, 00:16:33.498 "message": "Input/output error" 00:16:33.498 } 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:33.498 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.755 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.012 request: 00:16:34.012 { 00:16:34.012 "name": "nvme0", 00:16:34.012 "trtype": "tcp", 00:16:34.012 "traddr": "10.0.0.2", 00:16:34.012 "adrfam": "ipv4", 00:16:34.012 "trsvcid": "4420", 00:16:34.012 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.012 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.012 "prchk_reftag": false, 00:16:34.012 "prchk_guard": false, 00:16:34.012 "hdgst": false, 00:16:34.012 "ddgst": false, 00:16:34.012 "dhchap_key": "key3", 00:16:34.012 "method": "bdev_nvme_attach_controller", 00:16:34.012 "req_id": 1 00:16:34.012 } 00:16:34.012 Got JSON-RPC error response 00:16:34.012 response: 00:16:34.012 { 00:16:34.012 "code": -5, 00:16:34.012 "message": "Input/output error" 00:16:34.012 } 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:34.012 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:34.577 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:34.577 request: 00:16:34.577 { 00:16:34.577 "name": "nvme0", 00:16:34.577 "trtype": "tcp", 00:16:34.577 "traddr": "10.0.0.2", 00:16:34.577 "adrfam": "ipv4", 00:16:34.577 "trsvcid": "4420", 00:16:34.577 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.577 "prchk_reftag": false, 00:16:34.577 "prchk_guard": false, 00:16:34.577 "hdgst": false, 00:16:34.577 "ddgst": false, 00:16:34.577 "dhchap_key": "key0", 00:16:34.577 "dhchap_ctrlr_key": "key1", 00:16:34.577 "method": "bdev_nvme_attach_controller", 00:16:34.577 "req_id": 1 00:16:34.577 } 00:16:34.577 Got JSON-RPC error response 00:16:34.577 response: 00:16:34.577 { 00:16:34.577 "code": -5, 00:16:34.577 "message": "Input/output error" 00:16:34.577 } 00:16:34.577 20:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:34.577 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.577 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.577 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.577 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:34.577 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:35.141 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.141 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1579138 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1579138 ']' 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1579138 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.398 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579138 00:16:35.655 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:35.655 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:35.655 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579138' 00:16:35.655 killing process with pid 1579138 00:16:35.655 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1579138 00:16:35.655 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1579138 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:35.912 20:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.912 rmmod nvme_tcp 00:16:35.912 rmmod nvme_fabrics 00:16:35.912 rmmod nvme_keyring 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1601775 ']' 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1601775 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1601775 ']' 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1601775 00:16:35.912 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601775 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1601775' 00:16:36.170 killing process with pid 1601775 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1601775 00:16:36.170 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1601775 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.427 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9LF /tmp/spdk.key-sha256.8Pj /tmp/spdk.key-sha384.3AP /tmp/spdk.key-sha512.Bf4 /tmp/spdk.key-sha512.kX3 /tmp/spdk.key-sha384.iJT /tmp/spdk.key-sha256.Yr6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:38.326 00:16:38.326 real 3m9.689s 00:16:38.326 user 7m21.491s 00:16:38.326 sys 0m25.283s 00:16:38.326 20:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.326 ************************************ 00:16:38.326 END TEST nvmf_auth_target 00:16:38.326 ************************************ 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.326 ************************************ 00:16:38.326 START TEST nvmf_bdevio_no_huge 00:16:38.326 ************************************ 00:16:38.326 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:38.584 * Looking for test storage... 
00:16:38.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.584 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:38.585 
20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.585 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:40.482 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.483 20:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.483 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.483 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.483 20:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.483 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.483 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.483 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.483 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:40.483 00:16:40.483 --- 10.0.0.2 ping statistics --- 00:16:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.483 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:40.483 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:16:40.483 00:16:40.483 --- 10.0.0.1 ping statistics --- 00:16:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.483 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1604540 00:16:40.773 20:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1604540 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1604540 ']' 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.773 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.773 [2024-07-24 20:44:36.128907] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:16:40.773 [2024-07-24 20:44:36.129010] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:40.773 [2024-07-24 20:44:36.215091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.030 [2024-07-24 20:44:36.342610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:41.030 [2024-07-24 20:44:36.342673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.030 [2024-07-24 20:44:36.342688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.030 [2024-07-24 20:44:36.342699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.030 [2024-07-24 20:44:36.342709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.030 [2024-07-24 20:44:36.342807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.031 [2024-07-24 20:44:36.342861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:41.031 [2024-07-24 20:44:36.342901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:41.031 [2024-07-24 20:44:36.342905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 [2024-07-24 20:44:36.468475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 Malloc0 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.031 [2024-07-24 20:44:36.507036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.031 { 00:16:41.031 "params": { 00:16:41.031 "name": "Nvme$subsystem", 00:16:41.031 "trtype": "$TEST_TRANSPORT", 00:16:41.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.031 "adrfam": "ipv4", 00:16:41.031 "trsvcid": "$NVMF_PORT", 00:16:41.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.031 "hdgst": ${hdgst:-false}, 00:16:41.031 "ddgst": ${ddgst:-false} 00:16:41.031 }, 00:16:41.031 "method": "bdev_nvme_attach_controller" 00:16:41.031 } 00:16:41.031 EOF 00:16:41.031 )") 00:16:41.031 20:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:41.031 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.031 "params": { 00:16:41.031 "name": "Nvme1", 00:16:41.031 "trtype": "tcp", 00:16:41.031 "traddr": "10.0.0.2", 00:16:41.031 "adrfam": "ipv4", 00:16:41.031 "trsvcid": "4420", 00:16:41.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.031 "hdgst": false, 00:16:41.031 "ddgst": false 00:16:41.031 }, 00:16:41.031 "method": "bdev_nvme_attach_controller" 00:16:41.031 }' 00:16:41.031 [2024-07-24 20:44:36.555210] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:16:41.031 [2024-07-24 20:44:36.555306] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1604570 ] 00:16:41.288 [2024-07-24 20:44:36.617679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.288 [2024-07-24 20:44:36.732700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.288 [2024-07-24 20:44:36.736262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.288 [2024-07-24 20:44:36.736274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.546 I/O targets: 00:16:41.546 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:41.546 00:16:41.546 00:16:41.546 CUnit - A unit testing framework for C - Version 2.1-3 00:16:41.546 http://cunit.sourceforge.net/ 00:16:41.546 00:16:41.546 00:16:41.546 Suite: bdevio tests on: Nvme1n1 00:16:41.546 Test: blockdev write read block 
...passed 00:16:41.546 Test: blockdev write zeroes read block ...passed 00:16:41.546 Test: blockdev write zeroes read no split ...passed 00:16:41.546 Test: blockdev write zeroes read split ...passed 00:16:41.546 Test: blockdev write zeroes read split partial ...passed 00:16:41.546 Test: blockdev reset ...[2024-07-24 20:44:37.017328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:41.546 [2024-07-24 20:44:37.017444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a1fb0 (9): Bad file descriptor 00:16:41.803 [2024-07-24 20:44:37.161029] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:41.803 passed 00:16:41.803 Test: blockdev write read 8 blocks ...passed 00:16:41.803 Test: blockdev write read size > 128k ...passed 00:16:41.803 Test: blockdev write read invalid size ...passed 00:16:41.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:41.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:41.803 Test: blockdev write read max offset ...passed 00:16:41.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:41.803 Test: blockdev writev readv 8 blocks ...passed 00:16:42.059 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.059 Test: blockdev writev readv block ...passed 00:16:42.059 Test: blockdev writev readv size > 128k ...passed 00:16:42.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.060 Test: blockdev comparev and writev ...[2024-07-24 20:44:37.454904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.454938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.454962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.454978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.455314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.455338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.455375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.455710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.455733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.455754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.455769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.456089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.456112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:16:42.060 [2024-07-24 20:44:37.456133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.060 [2024-07-24 20:44:37.456148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.060 passed 00:16:42.060 Test: blockdev nvme passthru rw ...passed 00:16:42.060 Test: blockdev nvme passthru vendor specific ...[2024-07-24 20:44:37.538529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.060 [2024-07-24 20:44:37.538556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.538727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.060 [2024-07-24 20:44:37.538749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.538915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.060 [2024-07-24 20:44:37.538938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.060 [2024-07-24 20:44:37.539114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.060 [2024-07-24 20:44:37.539137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.060 passed 00:16:42.060 Test: blockdev nvme admin passthru ...passed 00:16:42.060 Test: blockdev copy ...passed 00:16:42.060 00:16:42.060 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.060 suites 1 1 
n/a 0 0 00:16:42.060 tests 23 23 23 0 0 00:16:42.060 asserts 152 152 152 0 n/a 00:16:42.060 00:16:42.060 Elapsed time = 1.411 seconds 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.623 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.623 rmmod nvme_tcp 00:16:42.623 rmmod nvme_fabrics 00:16:42.623 rmmod nvme_keyring 00:16:42.623 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.623 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:42.623 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:42.623 20:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1604540 ']' 00:16:42.623 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1604540 00:16:42.623 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1604540 ']' 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1604540 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604540 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604540' 00:16:42.624 killing process with pid 1604540 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1604540 00:16:42.624 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1604540 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.188 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.088 00:16:45.088 real 0m6.651s 00:16:45.088 user 0m11.179s 00:16:45.088 sys 0m2.527s 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.088 ************************************ 00:16:45.088 END TEST nvmf_bdevio_no_huge 00:16:45.088 ************************************ 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.088 ************************************ 00:16:45.088 START TEST nvmf_tls 00:16:45.088 ************************************ 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:45.088 * Looking for test storage... 
00:16:45.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.088 
20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.088 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:47.614 20:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.614 20:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:47.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:47.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:47.614 20:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:47.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:47.614 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.614 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:47.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:16:47.615 00:16:47.615 --- 10.0.0.2 ping statistics --- 00:16:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.615 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:16:47.615 00:16:47.615 --- 10.0.0.1 ping statistics --- 00:16:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.615 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1606762 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1606762 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1606762 ']' 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.615 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.615 [2024-07-24 20:44:42.825298] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:16:47.615 [2024-07-24 20:44:42.825364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.615 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.615 [2024-07-24 20:44:42.892558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.615 [2024-07-24 20:44:43.009081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.615 [2024-07-24 20:44:43.009135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.615 [2024-07-24 20:44:43.009152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.615 [2024-07-24 20:44:43.009165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.615 [2024-07-24 20:44:43.009176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:47.615 [2024-07-24 20:44:43.009203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:48.545 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:48.802 true 00:16:48.802 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:48.802 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:48.802 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:48.802 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:48.802 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:49.059 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.059 20:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:49.317 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:49.317 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:49.317 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:49.574 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.574 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:49.831 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:49.831 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:49.831 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.831 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:50.089 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:50.089 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:50.089 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:50.346 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.346 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:50.604 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:50.604 
20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:50.604 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:50.861 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.861 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.IBDIr3LO2w 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.NwAMjKaUKt 00:16:51.119 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:51.377 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:51.377 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.IBDIr3LO2w 00:16:51.377 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NwAMjKaUKt 00:16:51.377 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:51.635 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:51.892 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.IBDIr3LO2w 00:16:51.892 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IBDIr3LO2w 00:16:51.893 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.150 [2024-07-24 20:44:47.602995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.150 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.407 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:52.665 [2024-07-24 20:44:48.160557] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.665 [2024-07-24 20:44:48.160841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.665 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:52.922 malloc0 00:16:52.922 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.179 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IBDIr3LO2w 00:16:53.437 
[2024-07-24 20:44:48.938026] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:53.437 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IBDIr3LO2w 00:16:53.437 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.661 Initializing NVMe Controllers 00:17:05.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:05.661 Initialization complete. Launching workers. 00:17:05.661 ======================================================== 00:17:05.661 Latency(us) 00:17:05.661 Device Information : IOPS MiB/s Average min max 00:17:05.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7374.68 28.81 8681.21 1261.94 9483.67 00:17:05.661 ======================================================== 00:17:05.661 Total : 7374.68 28.81 8681.21 1261.94 9483.67 00:17:05.661 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IBDIr3LO2w 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IBDIr3LO2w' 00:17:05.661 20:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1608667 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1608667 /var/tmp/bdevperf.sock 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1608667 ']' 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.661 [2024-07-24 20:44:59.129638] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:17:05.661 [2024-07-24 20:44:59.129718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608667 ] 00:17:05.661 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.661 [2024-07-24 20:44:59.186178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.661 [2024-07-24 20:44:59.291012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IBDIr3LO2w 00:17:05.661 [2024-07-24 20:44:59.680539] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.661 [2024-07-24 20:44:59.680679] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:05.661 TLSTESTn1 00:17:05.661 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:05.661 Running I/O for 10 seconds... 
00:17:15.619 00:17:15.619 Latency(us) 00:17:15.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.619 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:15.619 Verification LBA range: start 0x0 length 0x2000 00:17:15.619 TLSTESTn1 : 10.02 3485.86 13.62 0.00 0.00 36654.85 7718.68 33399.09 00:17:15.619 =================================================================================================================== 00:17:15.619 Total : 3485.86 13.62 0.00 0.00 36654.85 7718.68 33399.09 00:17:15.619 0 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1608667 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1608667 ']' 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1608667 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608667 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608667' 00:17:15.619 killing process with pid 1608667 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1608667 00:17:15.619 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.619 
00:17:15.619 Latency(us) 00:17:15.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.619 =================================================================================================================== 00:17:15.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.619 [2024-07-24 20:45:09.973839] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:15.619 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1608667 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NwAMjKaUKt 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NwAMjKaUKt 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NwAMjKaUKt 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.619 20:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NwAMjKaUKt' 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1610614 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1610614 /var/tmp/bdevperf.sock 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1610614 ']' 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.619 [2024-07-24 20:45:10.262057] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:17:15.619 [2024-07-24 20:45:10.262134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610614 ] 00:17:15.619 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.619 [2024-07-24 20:45:10.319801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.619 [2024-07-24 20:45:10.430348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:15.619 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NwAMjKaUKt 00:17:15.619 [2024-07-24 20:45:10.771917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.619 [2024-07-24 20:45:10.772046] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:15.619 [2024-07-24 20:45:10.779794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:15.619 [2024-07-24 20:45:10.779978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2435f90 (107): Transport endpoint is not connected 00:17:15.619 [2024-07-24 20:45:10.780966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2435f90 
(9): Bad file descriptor 00:17:15.619 [2024-07-24 20:45:10.781965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:15.619 [2024-07-24 20:45:10.781983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:15.619 [2024-07-24 20:45:10.781999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:15.619 request: 00:17:15.619 { 00:17:15.619 "name": "TLSTEST", 00:17:15.620 "trtype": "tcp", 00:17:15.620 "traddr": "10.0.0.2", 00:17:15.620 "adrfam": "ipv4", 00:17:15.620 "trsvcid": "4420", 00:17:15.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.620 "prchk_reftag": false, 00:17:15.620 "prchk_guard": false, 00:17:15.620 "hdgst": false, 00:17:15.620 "ddgst": false, 00:17:15.620 "psk": "/tmp/tmp.NwAMjKaUKt", 00:17:15.620 "method": "bdev_nvme_attach_controller", 00:17:15.620 "req_id": 1 00:17:15.620 } 00:17:15.620 Got JSON-RPC error response 00:17:15.620 response: 00:17:15.620 { 00:17:15.620 "code": -5, 00:17:15.620 "message": "Input/output error" 00:17:15.620 } 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1610614 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1610614 ']' 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1610614 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610614 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:15.620 20:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610614' 00:17:15.620 killing process with pid 1610614 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1610614 00:17:15.620 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.620 00:17:15.620 Latency(us) 00:17:15.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.620 =================================================================================================================== 00:17:15.620 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.620 [2024-07-24 20:45:10.831274] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:15.620 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1610614 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IBDIr3LO2w 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IBDIr3LO2w 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IBDIr3LO2w 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IBDIr3LO2w' 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1610752 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1610752 /var/tmp/bdevperf.sock 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1610752 ']' 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.620 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.620 [2024-07-24 20:45:11.130438] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:15.620 [2024-07-24 20:45:11.130517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610752 ] 00:17:15.620 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.878 [2024-07-24 20:45:11.188125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.878 [2024-07-24 20:45:11.291249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.878 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.878 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:15.878 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.IBDIr3LO2w 00:17:16.135 [2024-07-24 20:45:11.677196] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.135 [2024-07-24 20:45:11.677330] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:16.135 [2024-07-24 20:45:11.682593] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.135 [2024-07-24 20:45:11.682625] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.135 [2024-07-24 20:45:11.682681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:16.135 [2024-07-24 20:45:11.683186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4af90 (107): Transport endpoint is not connected 00:17:16.135 [2024-07-24 20:45:11.684175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4af90 (9): Bad file descriptor 00:17:16.135 [2024-07-24 20:45:11.685173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:16.135 [2024-07-24 20:45:11.685193] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:16.135 [2024-07-24 20:45:11.685224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:16.135 request: 00:17:16.135 { 00:17:16.135 "name": "TLSTEST", 00:17:16.135 "trtype": "tcp", 00:17:16.135 "traddr": "10.0.0.2", 00:17:16.135 "adrfam": "ipv4", 00:17:16.135 "trsvcid": "4420", 00:17:16.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.135 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:16.135 "prchk_reftag": false, 00:17:16.135 "prchk_guard": false, 00:17:16.135 "hdgst": false, 00:17:16.135 "ddgst": false, 00:17:16.135 "psk": "/tmp/tmp.IBDIr3LO2w", 00:17:16.135 "method": "bdev_nvme_attach_controller", 00:17:16.135 "req_id": 1 00:17:16.136 } 00:17:16.136 Got JSON-RPC error response 00:17:16.136 response: 00:17:16.136 { 00:17:16.136 "code": -5, 00:17:16.136 "message": "Input/output error" 00:17:16.136 } 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1610752 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1610752 ']' 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1610752 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610752 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610752' 00:17:16.393 killing process with pid 1610752 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1610752 00:17:16.393 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:16.393 00:17:16.393 Latency(us) 00:17:16.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.393 =================================================================================================================== 00:17:16.393 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.393 [2024-07-24 20:45:11.733141] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:16.393 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1610752 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IBDIr3LO2w 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IBDIr3LO2w 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IBDIr3LO2w 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IBDIr3LO2w' 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1610794 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1610794 /var/tmp/bdevperf.sock 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1610794 ']' 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.651 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.651 [2024-07-24 20:45:12.034733] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:16.652 [2024-07-24 20:45:12.034826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610794 ] 00:17:16.652 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.652 [2024-07-24 20:45:12.097958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.652 [2024-07-24 20:45:12.211108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.909 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.909 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:16.909 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IBDIr3LO2w 00:17:17.167 [2024-07-24 20:45:12.597432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.167 [2024-07-24 20:45:12.597581] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:17.167 [2024-07-24 20:45:12.604393] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.167 [2024-07-24 20:45:12.604424] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.167 [2024-07-24 20:45:12.604489] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.167 [2024-07-24 20:45:12.605427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9f90 (107): Transport endpoint is not connected 00:17:17.167 [2024-07-24 20:45:12.606417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9f90 (9): Bad file descriptor 00:17:17.167 [2024-07-24 20:45:12.607414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:17.167 [2024-07-24 20:45:12.607433] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.167 [2024-07-24 20:45:12.607450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:17.167 request: 00:17:17.167 { 00:17:17.167 "name": "TLSTEST", 00:17:17.167 "trtype": "tcp", 00:17:17.167 "traddr": "10.0.0.2", 00:17:17.167 "adrfam": "ipv4", 00:17:17.167 "trsvcid": "4420", 00:17:17.167 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.167 "prchk_reftag": false, 00:17:17.167 "prchk_guard": false, 00:17:17.167 "hdgst": false, 00:17:17.167 "ddgst": false, 00:17:17.167 "psk": "/tmp/tmp.IBDIr3LO2w", 00:17:17.167 "method": "bdev_nvme_attach_controller", 00:17:17.167 "req_id": 1 00:17:17.167 } 00:17:17.167 Got JSON-RPC error response 00:17:17.167 response: 00:17:17.167 { 00:17:17.167 "code": -5, 00:17:17.167 "message": "Input/output error" 00:17:17.167 } 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1610794 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1610794 ']' 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1610794 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610794 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610794' 00:17:17.167 killing process with pid 1610794 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1610794 00:17:17.167 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:17.167 00:17:17.167 Latency(us) 00:17:17.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.167 =================================================================================================================== 00:17:17.167 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.167 [2024-07-24 20:45:12.658787] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:17.167 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1610794 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:17.426 20:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1610904 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1610904 /var/tmp/bdevperf.sock 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1610904 ']' 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:17.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.426 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.426 [2024-07-24 20:45:12.963542] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:17.426 [2024-07-24 20:45:12.963623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610904 ] 00:17:17.426 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.684 [2024-07-24 20:45:13.021757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.684 [2024-07-24 20:45:13.136939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.684 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.684 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:17.684 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:17.942 [2024-07-24 20:45:13.466782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.942 [2024-07-24 20:45:13.468417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206a770 (9): Bad file descriptor 00:17:17.942 [2024-07-24 20:45:13.469413] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.942 [2024-07-24 20:45:13.469432] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.942 [2024-07-24 20:45:13.469464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.942 request: 00:17:17.942 { 00:17:17.942 "name": "TLSTEST", 00:17:17.942 "trtype": "tcp", 00:17:17.942 "traddr": "10.0.0.2", 00:17:17.942 "adrfam": "ipv4", 00:17:17.942 "trsvcid": "4420", 00:17:17.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.942 "prchk_reftag": false, 00:17:17.942 "prchk_guard": false, 00:17:17.942 "hdgst": false, 00:17:17.942 "ddgst": false, 00:17:17.942 "method": "bdev_nvme_attach_controller", 00:17:17.942 "req_id": 1 00:17:17.942 } 00:17:17.942 Got JSON-RPC error response 00:17:17.942 response: 00:17:17.942 { 00:17:17.942 "code": -5, 00:17:17.942 "message": "Input/output error" 00:17:17.942 } 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1610904 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1610904 ']' 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1610904 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.942 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610904 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:18.200 20:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610904' 00:17:18.200 killing process with pid 1610904 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1610904 00:17:18.200 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.200 00:17:18.200 Latency(us) 00:17:18.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.200 =================================================================================================================== 00:17:18.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1610904 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1606762 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1606762 ']' 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1606762 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.200 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606762 00:17:18.457 
20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:18.457 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:18.457 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1606762' 00:17:18.457 killing process with pid 1606762 00:17:18.457 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1606762 00:17:18.457 [2024-07-24 20:45:13.782526] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:18.457 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1606762 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.dZ16jTHSF8 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.dZ16jTHSF8 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1611065 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1611065 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1611065 ']' 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.715 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.715 [2024-07-24 20:45:14.184235] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:18.715 [2024-07-24 20:45:14.184338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.715 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.715 [2024-07-24 20:45:14.253138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.973 [2024-07-24 20:45:14.369964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.973 [2024-07-24 20:45:14.370028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.973 [2024-07-24 20:45:14.370045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.973 [2024-07-24 20:45:14.370059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.973 [2024-07-24 20:45:14.370071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.973 [2024-07-24 20:45:14.370101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dZ16jTHSF8 00:17:18.973 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:19.230 [2024-07-24 20:45:14.767418] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.230 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.487 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:19.744 [2024-07-24 20:45:15.248743] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.744 [2024-07-24 20:45:15.248990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:19.744 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:20.002 malloc0 00:17:20.002 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.260 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:20.516 [2024-07-24 20:45:15.990369] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dZ16jTHSF8 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dZ16jTHSF8' 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1611337 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.516 20:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1611337 /var/tmp/bdevperf.sock 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1611337 ']' 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.516 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.516 [2024-07-24 20:45:16.054951] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:17:20.516 [2024-07-24 20:45:16.055023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611337 ] 00:17:20.516 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.774 [2024-07-24 20:45:16.111797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.774 [2024-07-24 20:45:16.217610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.774 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.774 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:20.774 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:21.031 [2024-07-24 20:45:16.562034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.031 [2024-07-24 20:45:16.562152] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:21.289 TLSTESTn1 00:17:21.289 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:21.289 Running I/O for 10 seconds... 
00:17:31.264 00:17:31.264 Latency(us) 00:17:31.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.264 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.264 Verification LBA range: start 0x0 length 0x2000 00:17:31.264 TLSTESTn1 : 10.02 3506.76 13.70 0.00 0.00 36425.87 5898.24 34564.17 00:17:31.264 =================================================================================================================== 00:17:31.264 Total : 3506.76 13.70 0.00 0.00 36425.87 5898.24 34564.17 00:17:31.264 0 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1611337 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1611337 ']' 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1611337 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.264 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611337 00:17:31.522 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:31.522 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:31.522 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611337' 00:17:31.522 killing process with pid 1611337 00:17:31.522 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1611337 00:17:31.522 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.522 
00:17:31.522 Latency(us) 00:17:31.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.522 =================================================================================================================== 00:17:31.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.522 [2024-07-24 20:45:26.843199] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:31.522 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1611337 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.dZ16jTHSF8 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dZ16jTHSF8 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dZ16jTHSF8 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dZ16jTHSF8 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.780 20:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dZ16jTHSF8' 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1612657 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1612657 /var/tmp/bdevperf.sock 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1612657 ']' 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.780 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.780 [2024-07-24 20:45:27.165646] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:31.780 [2024-07-24 20:45:27.165725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612657 ] 00:17:31.780 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.780 [2024-07-24 20:45:27.221997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.780 [2024-07-24 20:45:27.326102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.038 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.038 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:32.038 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:32.295 [2024-07-24 20:45:27.711995] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.295 [2024-07-24 20:45:27.712089] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:32.295 [2024-07-24 20:45:27.712103] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.dZ16jTHSF8 00:17:32.295 request: 00:17:32.295 { 00:17:32.295 "name": "TLSTEST", 00:17:32.295 "trtype": "tcp", 00:17:32.295 "traddr": "10.0.0.2", 00:17:32.295 
"adrfam": "ipv4", 00:17:32.295 "trsvcid": "4420", 00:17:32.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.295 "prchk_reftag": false, 00:17:32.295 "prchk_guard": false, 00:17:32.295 "hdgst": false, 00:17:32.295 "ddgst": false, 00:17:32.295 "psk": "/tmp/tmp.dZ16jTHSF8", 00:17:32.295 "method": "bdev_nvme_attach_controller", 00:17:32.295 "req_id": 1 00:17:32.295 } 00:17:32.295 Got JSON-RPC error response 00:17:32.295 response: 00:17:32.295 { 00:17:32.295 "code": -1, 00:17:32.295 "message": "Operation not permitted" 00:17:32.295 } 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1612657 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1612657 ']' 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1612657 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612657 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612657' 00:17:32.295 killing process with pid 1612657 00:17:32.295 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1612657 00:17:32.295 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.295 00:17:32.295 Latency(us) 00:17:32.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:32.296 =================================================================================================================== 00:17:32.296 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.296 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1612657 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1611065 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1611065 ']' 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1611065 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611065 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611065' 00:17:32.552 killing process with pid 1611065 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1611065 00:17:32.552 [2024-07-24 20:45:28.041852] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:32.552 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1611065 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1612807 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1612807 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1612807 ']' 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.843 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.843 [2024-07-24 20:45:28.371584] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:32.844 [2024-07-24 20:45:28.371680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.101 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.101 [2024-07-24 20:45:28.437938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.101 [2024-07-24 20:45:28.543148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.101 [2024-07-24 20:45:28.543204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.101 [2024-07-24 20:45:28.543233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.101 [2024-07-24 20:45:28.543250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.101 [2024-07-24 20:45:28.543261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.101 [2024-07-24 20:45:28.543301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.101 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.101 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:33.101 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.101 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.101 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dZ16jTHSF8 00:17:33.359 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.359 [2024-07-24 20:45:28.912733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.616 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:33.616 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.873 [2024-07-24 20:45:29.410111] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.873 [2024-07-24 20:45:29.410400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.873 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.438 malloc0 00:17:34.438 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.438 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:34.695 [2024-07-24 20:45:30.240267] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:34.695 [2024-07-24 20:45:30.240319] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:34.695 [2024-07-24 20:45:30.240357] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:34.695 request: 00:17:34.695 { 
00:17:34.695 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.695 "host": "nqn.2016-06.io.spdk:host1", 00:17:34.695 "psk": "/tmp/tmp.dZ16jTHSF8", 00:17:34.695 "method": "nvmf_subsystem_add_host", 00:17:34.695 "req_id": 1 00:17:34.695 } 00:17:34.695 Got JSON-RPC error response 00:17:34.695 response: 00:17:34.695 { 00:17:34.695 "code": -32603, 00:17:34.695 "message": "Internal error" 00:17:34.695 } 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1612807 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1612807 ']' 00:17:34.695 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1612807 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612807 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612807' 00:17:34.953 killing process with pid 1612807 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1612807 00:17:34.953 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1612807 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.dZ16jTHSF8 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1613103 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1613103 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1613103 ']' 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:35.211 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.211 [2024-07-24 20:45:30.650328] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:35.211 [2024-07-24 20:45:30.650408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.211 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.211 [2024-07-24 20:45:30.718019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.468 [2024-07-24 20:45:30.829915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.468 [2024-07-24 20:45:30.829979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.468 [2024-07-24 20:45:30.829995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.468 [2024-07-24 20:45:30.830008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.468 [2024-07-24 20:45:30.830019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.468 [2024-07-24 20:45:30.830056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.032 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.032 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:36.032 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.032 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.032 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.290 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.290 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:36.290 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dZ16jTHSF8 00:17:36.290 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.290 [2024-07-24 20:45:31.833985] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.290 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.547 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:36.805 [2024-07-24 20:45:32.327372] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.805 [2024-07-24 20:45:32.327637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:36.805 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.062 malloc0 00:17:37.062 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.320 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:37.577 [2024-07-24 20:45:33.082012] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1613394 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1613394 /var/tmp/bdevperf.sock 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1613394 ']' 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:17:37.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.577 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.577 [2024-07-24 20:45:33.143207] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:37.577 [2024-07-24 20:45:33.143293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613394 ] 00:17:37.834 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.834 [2024-07-24 20:45:33.200509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.834 [2024-07-24 20:45:33.309326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.090 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.090 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:38.090 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:38.347 [2024-07-24 20:45:33.683688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.347 [2024-07-24 20:45:33.683809] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.347 TLSTESTn1 00:17:38.347 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:38.605 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:38.605 "subsystems": [ 00:17:38.605 { 00:17:38.605 "subsystem": "keyring", 00:17:38.605 "config": [] 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "subsystem": "iobuf", 00:17:38.605 "config": [ 00:17:38.605 { 00:17:38.605 "method": "iobuf_set_options", 00:17:38.605 "params": { 00:17:38.605 "small_pool_count": 8192, 00:17:38.605 "large_pool_count": 1024, 00:17:38.605 "small_bufsize": 8192, 00:17:38.605 "large_bufsize": 135168 00:17:38.605 } 00:17:38.605 } 00:17:38.605 ] 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "subsystem": "sock", 00:17:38.605 "config": [ 00:17:38.605 { 00:17:38.605 "method": "sock_set_default_impl", 00:17:38.605 "params": { 00:17:38.605 "impl_name": "posix" 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "sock_impl_set_options", 00:17:38.605 "params": { 00:17:38.605 "impl_name": "ssl", 00:17:38.605 "recv_buf_size": 4096, 00:17:38.605 "send_buf_size": 4096, 00:17:38.605 "enable_recv_pipe": true, 00:17:38.605 "enable_quickack": false, 00:17:38.605 "enable_placement_id": 0, 00:17:38.605 "enable_zerocopy_send_server": true, 00:17:38.605 "enable_zerocopy_send_client": false, 00:17:38.605 "zerocopy_threshold": 0, 00:17:38.605 "tls_version": 0, 00:17:38.605 "enable_ktls": false 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "sock_impl_set_options", 00:17:38.605 "params": { 00:17:38.605 "impl_name": "posix", 00:17:38.605 "recv_buf_size": 2097152, 00:17:38.605 "send_buf_size": 2097152, 00:17:38.605 "enable_recv_pipe": true, 00:17:38.605 "enable_quickack": false, 00:17:38.605 "enable_placement_id": 0, 00:17:38.605 "enable_zerocopy_send_server": true, 00:17:38.605 "enable_zerocopy_send_client": false, 00:17:38.605 "zerocopy_threshold": 0, 00:17:38.605 "tls_version": 0, 00:17:38.605 "enable_ktls": false 00:17:38.605 } 
00:17:38.605 } 00:17:38.605 ] 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "subsystem": "vmd", 00:17:38.605 "config": [] 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "subsystem": "accel", 00:17:38.605 "config": [ 00:17:38.605 { 00:17:38.605 "method": "accel_set_options", 00:17:38.605 "params": { 00:17:38.605 "small_cache_size": 128, 00:17:38.605 "large_cache_size": 16, 00:17:38.605 "task_count": 2048, 00:17:38.605 "sequence_count": 2048, 00:17:38.605 "buf_count": 2048 00:17:38.605 } 00:17:38.605 } 00:17:38.605 ] 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "subsystem": "bdev", 00:17:38.605 "config": [ 00:17:38.605 { 00:17:38.605 "method": "bdev_set_options", 00:17:38.605 "params": { 00:17:38.605 "bdev_io_pool_size": 65535, 00:17:38.605 "bdev_io_cache_size": 256, 00:17:38.605 "bdev_auto_examine": true, 00:17:38.605 "iobuf_small_cache_size": 128, 00:17:38.605 "iobuf_large_cache_size": 16 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_raid_set_options", 00:17:38.605 "params": { 00:17:38.605 "process_window_size_kb": 1024, 00:17:38.605 "process_max_bandwidth_mb_sec": 0 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_iscsi_set_options", 00:17:38.605 "params": { 00:17:38.605 "timeout_sec": 30 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_nvme_set_options", 00:17:38.605 "params": { 00:17:38.605 "action_on_timeout": "none", 00:17:38.605 "timeout_us": 0, 00:17:38.605 "timeout_admin_us": 0, 00:17:38.605 "keep_alive_timeout_ms": 10000, 00:17:38.605 "arbitration_burst": 0, 00:17:38.605 "low_priority_weight": 0, 00:17:38.605 "medium_priority_weight": 0, 00:17:38.605 "high_priority_weight": 0, 00:17:38.605 "nvme_adminq_poll_period_us": 10000, 00:17:38.605 "nvme_ioq_poll_period_us": 0, 00:17:38.605 "io_queue_requests": 0, 00:17:38.605 "delay_cmd_submit": true, 00:17:38.605 "transport_retry_count": 4, 00:17:38.605 "bdev_retry_count": 3, 00:17:38.605 "transport_ack_timeout": 0, 00:17:38.605 
"ctrlr_loss_timeout_sec": 0, 00:17:38.605 "reconnect_delay_sec": 0, 00:17:38.605 "fast_io_fail_timeout_sec": 0, 00:17:38.605 "disable_auto_failback": false, 00:17:38.605 "generate_uuids": false, 00:17:38.605 "transport_tos": 0, 00:17:38.605 "nvme_error_stat": false, 00:17:38.605 "rdma_srq_size": 0, 00:17:38.605 "io_path_stat": false, 00:17:38.605 "allow_accel_sequence": false, 00:17:38.605 "rdma_max_cq_size": 0, 00:17:38.605 "rdma_cm_event_timeout_ms": 0, 00:17:38.605 "dhchap_digests": [ 00:17:38.605 "sha256", 00:17:38.605 "sha384", 00:17:38.605 "sha512" 00:17:38.605 ], 00:17:38.605 "dhchap_dhgroups": [ 00:17:38.605 "null", 00:17:38.605 "ffdhe2048", 00:17:38.605 "ffdhe3072", 00:17:38.605 "ffdhe4096", 00:17:38.605 "ffdhe6144", 00:17:38.605 "ffdhe8192" 00:17:38.605 ] 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_nvme_set_hotplug", 00:17:38.605 "params": { 00:17:38.605 "period_us": 100000, 00:17:38.605 "enable": false 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_malloc_create", 00:17:38.605 "params": { 00:17:38.605 "name": "malloc0", 00:17:38.605 "num_blocks": 8192, 00:17:38.605 "block_size": 4096, 00:17:38.605 "physical_block_size": 4096, 00:17:38.605 "uuid": "950d5d3a-28c0-4b1c-92e3-6177c1d634c5", 00:17:38.605 "optimal_io_boundary": 0, 00:17:38.605 "md_size": 0, 00:17:38.605 "dif_type": 0, 00:17:38.605 "dif_is_head_of_md": false, 00:17:38.605 "dif_pi_format": 0 00:17:38.605 } 00:17:38.605 }, 00:17:38.605 { 00:17:38.605 "method": "bdev_wait_for_examine" 00:17:38.606 } 00:17:38.606 ] 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "subsystem": "nbd", 00:17:38.606 "config": [] 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "subsystem": "scheduler", 00:17:38.606 "config": [ 00:17:38.606 { 00:17:38.606 "method": "framework_set_scheduler", 00:17:38.606 "params": { 00:17:38.606 "name": "static" 00:17:38.606 } 00:17:38.606 } 00:17:38.606 ] 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "subsystem": "nvmf", 00:17:38.606 
"config": [ 00:17:38.606 { 00:17:38.606 "method": "nvmf_set_config", 00:17:38.606 "params": { 00:17:38.606 "discovery_filter": "match_any", 00:17:38.606 "admin_cmd_passthru": { 00:17:38.606 "identify_ctrlr": false 00:17:38.606 } 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_set_max_subsystems", 00:17:38.606 "params": { 00:17:38.606 "max_subsystems": 1024 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_set_crdt", 00:17:38.606 "params": { 00:17:38.606 "crdt1": 0, 00:17:38.606 "crdt2": 0, 00:17:38.606 "crdt3": 0 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_create_transport", 00:17:38.606 "params": { 00:17:38.606 "trtype": "TCP", 00:17:38.606 "max_queue_depth": 128, 00:17:38.606 "max_io_qpairs_per_ctrlr": 127, 00:17:38.606 "in_capsule_data_size": 4096, 00:17:38.606 "max_io_size": 131072, 00:17:38.606 "io_unit_size": 131072, 00:17:38.606 "max_aq_depth": 128, 00:17:38.606 "num_shared_buffers": 511, 00:17:38.606 "buf_cache_size": 4294967295, 00:17:38.606 "dif_insert_or_strip": false, 00:17:38.606 "zcopy": false, 00:17:38.606 "c2h_success": false, 00:17:38.606 "sock_priority": 0, 00:17:38.606 "abort_timeout_sec": 1, 00:17:38.606 "ack_timeout": 0, 00:17:38.606 "data_wr_pool_size": 0 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_create_subsystem", 00:17:38.606 "params": { 00:17:38.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.606 "allow_any_host": false, 00:17:38.606 "serial_number": "SPDK00000000000001", 00:17:38.606 "model_number": "SPDK bdev Controller", 00:17:38.606 "max_namespaces": 10, 00:17:38.606 "min_cntlid": 1, 00:17:38.606 "max_cntlid": 65519, 00:17:38.606 "ana_reporting": false 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_subsystem_add_host", 00:17:38.606 "params": { 00:17:38.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.606 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.606 "psk": "/tmp/tmp.dZ16jTHSF8" 
00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_subsystem_add_ns", 00:17:38.606 "params": { 00:17:38.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.606 "namespace": { 00:17:38.606 "nsid": 1, 00:17:38.606 "bdev_name": "malloc0", 00:17:38.606 "nguid": "950D5D3A28C04B1C92E36177C1D634C5", 00:17:38.606 "uuid": "950d5d3a-28c0-4b1c-92e3-6177c1d634c5", 00:17:38.606 "no_auto_visible": false 00:17:38.606 } 00:17:38.606 } 00:17:38.606 }, 00:17:38.606 { 00:17:38.606 "method": "nvmf_subsystem_add_listener", 00:17:38.606 "params": { 00:17:38.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.606 "listen_address": { 00:17:38.606 "trtype": "TCP", 00:17:38.606 "adrfam": "IPv4", 00:17:38.606 "traddr": "10.0.0.2", 00:17:38.606 "trsvcid": "4420" 00:17:38.606 }, 00:17:38.606 "secure_channel": true 00:17:38.606 } 00:17:38.606 } 00:17:38.606 ] 00:17:38.606 } 00:17:38.606 ] 00:17:38.606 }' 00:17:38.606 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:39.170 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:39.170 "subsystems": [ 00:17:39.170 { 00:17:39.170 "subsystem": "keyring", 00:17:39.170 "config": [] 00:17:39.170 }, 00:17:39.170 { 00:17:39.170 "subsystem": "iobuf", 00:17:39.170 "config": [ 00:17:39.170 { 00:17:39.170 "method": "iobuf_set_options", 00:17:39.170 "params": { 00:17:39.170 "small_pool_count": 8192, 00:17:39.170 "large_pool_count": 1024, 00:17:39.170 "small_bufsize": 8192, 00:17:39.170 "large_bufsize": 135168 00:17:39.170 } 00:17:39.170 } 00:17:39.170 ] 00:17:39.170 }, 00:17:39.170 { 00:17:39.170 "subsystem": "sock", 00:17:39.170 "config": [ 00:17:39.170 { 00:17:39.170 "method": "sock_set_default_impl", 00:17:39.170 "params": { 00:17:39.170 "impl_name": "posix" 00:17:39.170 } 00:17:39.170 }, 00:17:39.170 { 00:17:39.170 "method": "sock_impl_set_options", 00:17:39.170 
"params": { 00:17:39.170 "impl_name": "ssl", 00:17:39.170 "recv_buf_size": 4096, 00:17:39.170 "send_buf_size": 4096, 00:17:39.170 "enable_recv_pipe": true, 00:17:39.170 "enable_quickack": false, 00:17:39.170 "enable_placement_id": 0, 00:17:39.170 "enable_zerocopy_send_server": true, 00:17:39.170 "enable_zerocopy_send_client": false, 00:17:39.170 "zerocopy_threshold": 0, 00:17:39.170 "tls_version": 0, 00:17:39.170 "enable_ktls": false 00:17:39.170 } 00:17:39.170 }, 00:17:39.170 { 00:17:39.170 "method": "sock_impl_set_options", 00:17:39.170 "params": { 00:17:39.170 "impl_name": "posix", 00:17:39.170 "recv_buf_size": 2097152, 00:17:39.171 "send_buf_size": 2097152, 00:17:39.171 "enable_recv_pipe": true, 00:17:39.171 "enable_quickack": false, 00:17:39.171 "enable_placement_id": 0, 00:17:39.171 "enable_zerocopy_send_server": true, 00:17:39.171 "enable_zerocopy_send_client": false, 00:17:39.171 "zerocopy_threshold": 0, 00:17:39.171 "tls_version": 0, 00:17:39.171 "enable_ktls": false 00:17:39.171 } 00:17:39.171 } 00:17:39.171 ] 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "subsystem": "vmd", 00:17:39.171 "config": [] 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "subsystem": "accel", 00:17:39.171 "config": [ 00:17:39.171 { 00:17:39.171 "method": "accel_set_options", 00:17:39.171 "params": { 00:17:39.171 "small_cache_size": 128, 00:17:39.171 "large_cache_size": 16, 00:17:39.171 "task_count": 2048, 00:17:39.171 "sequence_count": 2048, 00:17:39.171 "buf_count": 2048 00:17:39.171 } 00:17:39.171 } 00:17:39.171 ] 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "subsystem": "bdev", 00:17:39.171 "config": [ 00:17:39.171 { 00:17:39.171 "method": "bdev_set_options", 00:17:39.171 "params": { 00:17:39.171 "bdev_io_pool_size": 65535, 00:17:39.171 "bdev_io_cache_size": 256, 00:17:39.171 "bdev_auto_examine": true, 00:17:39.171 "iobuf_small_cache_size": 128, 00:17:39.171 "iobuf_large_cache_size": 16 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_raid_set_options", 
00:17:39.171 "params": { 00:17:39.171 "process_window_size_kb": 1024, 00:17:39.171 "process_max_bandwidth_mb_sec": 0 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_iscsi_set_options", 00:17:39.171 "params": { 00:17:39.171 "timeout_sec": 30 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_nvme_set_options", 00:17:39.171 "params": { 00:17:39.171 "action_on_timeout": "none", 00:17:39.171 "timeout_us": 0, 00:17:39.171 "timeout_admin_us": 0, 00:17:39.171 "keep_alive_timeout_ms": 10000, 00:17:39.171 "arbitration_burst": 0, 00:17:39.171 "low_priority_weight": 0, 00:17:39.171 "medium_priority_weight": 0, 00:17:39.171 "high_priority_weight": 0, 00:17:39.171 "nvme_adminq_poll_period_us": 10000, 00:17:39.171 "nvme_ioq_poll_period_us": 0, 00:17:39.171 "io_queue_requests": 512, 00:17:39.171 "delay_cmd_submit": true, 00:17:39.171 "transport_retry_count": 4, 00:17:39.171 "bdev_retry_count": 3, 00:17:39.171 "transport_ack_timeout": 0, 00:17:39.171 "ctrlr_loss_timeout_sec": 0, 00:17:39.171 "reconnect_delay_sec": 0, 00:17:39.171 "fast_io_fail_timeout_sec": 0, 00:17:39.171 "disable_auto_failback": false, 00:17:39.171 "generate_uuids": false, 00:17:39.171 "transport_tos": 0, 00:17:39.171 "nvme_error_stat": false, 00:17:39.171 "rdma_srq_size": 0, 00:17:39.171 "io_path_stat": false, 00:17:39.171 "allow_accel_sequence": false, 00:17:39.171 "rdma_max_cq_size": 0, 00:17:39.171 "rdma_cm_event_timeout_ms": 0, 00:17:39.171 "dhchap_digests": [ 00:17:39.171 "sha256", 00:17:39.171 "sha384", 00:17:39.171 "sha512" 00:17:39.171 ], 00:17:39.171 "dhchap_dhgroups": [ 00:17:39.171 "null", 00:17:39.171 "ffdhe2048", 00:17:39.171 "ffdhe3072", 00:17:39.171 "ffdhe4096", 00:17:39.171 "ffdhe6144", 00:17:39.171 "ffdhe8192" 00:17:39.171 ] 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_nvme_attach_controller", 00:17:39.171 "params": { 00:17:39.171 "name": "TLSTEST", 00:17:39.171 "trtype": "TCP", 00:17:39.171 "adrfam": "IPv4", 
00:17:39.171 "traddr": "10.0.0.2", 00:17:39.171 "trsvcid": "4420", 00:17:39.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.171 "prchk_reftag": false, 00:17:39.171 "prchk_guard": false, 00:17:39.171 "ctrlr_loss_timeout_sec": 0, 00:17:39.171 "reconnect_delay_sec": 0, 00:17:39.171 "fast_io_fail_timeout_sec": 0, 00:17:39.171 "psk": "/tmp/tmp.dZ16jTHSF8", 00:17:39.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.171 "hdgst": false, 00:17:39.171 "ddgst": false 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_nvme_set_hotplug", 00:17:39.171 "params": { 00:17:39.171 "period_us": 100000, 00:17:39.171 "enable": false 00:17:39.171 } 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "method": "bdev_wait_for_examine" 00:17:39.171 } 00:17:39.171 ] 00:17:39.171 }, 00:17:39.171 { 00:17:39.171 "subsystem": "nbd", 00:17:39.171 "config": [] 00:17:39.171 } 00:17:39.171 ] 00:17:39.171 }' 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1613394 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1613394 ']' 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1613394 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613394 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613394' 00:17:39.171 killing process with 
pid 1613394 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1613394 00:17:39.171 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.171 00:17:39.171 Latency(us) 00:17:39.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.171 =================================================================================================================== 00:17:39.171 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.171 [2024-07-24 20:45:34.531662] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:39.171 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1613394 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1613103 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1613103 ']' 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1613103 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613103 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613103' 00:17:39.429 killing process with pid 1613103 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1613103 00:17:39.429 [2024-07-24 20:45:34.818807] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:39.429 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1613103 00:17:39.687 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:39.687 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.687 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:39.687 "subsystems": [ 00:17:39.687 { 00:17:39.687 "subsystem": "keyring", 00:17:39.687 "config": [] 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "subsystem": "iobuf", 00:17:39.687 "config": [ 00:17:39.687 { 00:17:39.687 "method": "iobuf_set_options", 00:17:39.687 "params": { 00:17:39.687 "small_pool_count": 8192, 00:17:39.687 "large_pool_count": 1024, 00:17:39.687 "small_bufsize": 8192, 00:17:39.687 "large_bufsize": 135168 00:17:39.687 } 00:17:39.687 } 00:17:39.687 ] 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "subsystem": "sock", 00:17:39.687 "config": [ 00:17:39.687 { 00:17:39.687 "method": "sock_set_default_impl", 00:17:39.687 "params": { 00:17:39.687 "impl_name": "posix" 00:17:39.687 } 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "method": "sock_impl_set_options", 00:17:39.687 "params": { 00:17:39.687 "impl_name": "ssl", 00:17:39.687 "recv_buf_size": 4096, 00:17:39.687 "send_buf_size": 4096, 00:17:39.687 "enable_recv_pipe": true, 00:17:39.687 "enable_quickack": false, 00:17:39.687 "enable_placement_id": 0, 00:17:39.687 "enable_zerocopy_send_server": true, 00:17:39.687 "enable_zerocopy_send_client": false, 00:17:39.687 "zerocopy_threshold": 0, 00:17:39.687 "tls_version": 0, 00:17:39.687 "enable_ktls": false 00:17:39.687 } 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "method": "sock_impl_set_options", 00:17:39.687 
"params": { 00:17:39.687 "impl_name": "posix", 00:17:39.687 "recv_buf_size": 2097152, 00:17:39.687 "send_buf_size": 2097152, 00:17:39.687 "enable_recv_pipe": true, 00:17:39.687 "enable_quickack": false, 00:17:39.687 "enable_placement_id": 0, 00:17:39.687 "enable_zerocopy_send_server": true, 00:17:39.687 "enable_zerocopy_send_client": false, 00:17:39.687 "zerocopy_threshold": 0, 00:17:39.687 "tls_version": 0, 00:17:39.687 "enable_ktls": false 00:17:39.687 } 00:17:39.687 } 00:17:39.687 ] 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "subsystem": "vmd", 00:17:39.687 "config": [] 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "subsystem": "accel", 00:17:39.687 "config": [ 00:17:39.687 { 00:17:39.687 "method": "accel_set_options", 00:17:39.687 "params": { 00:17:39.687 "small_cache_size": 128, 00:17:39.687 "large_cache_size": 16, 00:17:39.687 "task_count": 2048, 00:17:39.687 "sequence_count": 2048, 00:17:39.687 "buf_count": 2048 00:17:39.687 } 00:17:39.687 } 00:17:39.687 ] 00:17:39.687 }, 00:17:39.687 { 00:17:39.687 "subsystem": "bdev", 00:17:39.687 "config": [ 00:17:39.687 { 00:17:39.687 "method": "bdev_set_options", 00:17:39.687 "params": { 00:17:39.687 "bdev_io_pool_size": 65535, 00:17:39.687 "bdev_io_cache_size": 256, 00:17:39.687 "bdev_auto_examine": true, 00:17:39.688 "iobuf_small_cache_size": 128, 00:17:39.688 "iobuf_large_cache_size": 16 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_raid_set_options", 00:17:39.688 "params": { 00:17:39.688 "process_window_size_kb": 1024, 00:17:39.688 "process_max_bandwidth_mb_sec": 0 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_iscsi_set_options", 00:17:39.688 "params": { 00:17:39.688 "timeout_sec": 30 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_nvme_set_options", 00:17:39.688 "params": { 00:17:39.688 "action_on_timeout": "none", 00:17:39.688 "timeout_us": 0, 00:17:39.688 "timeout_admin_us": 0, 00:17:39.688 "keep_alive_timeout_ms": 10000, 
00:17:39.688 "arbitration_burst": 0, 00:17:39.688 "low_priority_weight": 0, 00:17:39.688 "medium_priority_weight": 0, 00:17:39.688 "high_priority_weight": 0, 00:17:39.688 "nvme_adminq_poll_period_us": 10000, 00:17:39.688 "nvme_ioq_poll_period_us": 0, 00:17:39.688 "io_queue_requests": 0, 00:17:39.688 "delay_cmd_submit": true, 00:17:39.688 "transport_retry_count": 4, 00:17:39.688 "bdev_retry_count": 3, 00:17:39.688 "transport_ack_timeout": 0, 00:17:39.688 "ctrlr_loss_timeout_sec": 0, 00:17:39.688 "reconnect_delay_sec": 0, 00:17:39.688 "fast_io_fail_timeout_sec": 0, 00:17:39.688 "disable_auto_failback": false, 00:17:39.688 "generate_uuids": false, 00:17:39.688 "transport_tos": 0, 00:17:39.688 "nvme_error_stat": false, 00:17:39.688 "rdma_srq_size": 0, 00:17:39.688 "io_path_stat": false, 00:17:39.688 "allow_accel_sequence": false, 00:17:39.688 "rdma_max_cq_size": 0, 00:17:39.688 "rdma_cm_event_timeout_ms": 0, 00:17:39.688 "dhchap_digests": [ 00:17:39.688 "sha256", 00:17:39.688 "sha384", 00:17:39.688 "sha512" 00:17:39.688 ], 00:17:39.688 "dhchap_dhgroups": [ 00:17:39.688 "null", 00:17:39.688 "ffdhe2048", 00:17:39.688 "ffdhe3072", 00:17:39.688 "ffdhe4096", 00:17:39.688 "ffdhe6144", 00:17:39.688 "ffdhe8192" 00:17:39.688 ] 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_nvme_set_hotplug", 00:17:39.688 "params": { 00:17:39.688 "period_us": 100000, 00:17:39.688 "enable": false 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_malloc_create", 00:17:39.688 "params": { 00:17:39.688 "name": "malloc0", 00:17:39.688 "num_blocks": 8192, 00:17:39.688 "block_size": 4096, 00:17:39.688 "physical_block_size": 4096, 00:17:39.688 "uuid": "950d5d3a-28c0-4b1c-92e3-6177c1d634c5", 00:17:39.688 "optimal_io_boundary": 0, 00:17:39.688 "md_size": 0, 00:17:39.688 "dif_type": 0, 00:17:39.688 "dif_is_head_of_md": false, 00:17:39.688 "dif_pi_format": 0 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "bdev_wait_for_examine" 
00:17:39.688 } 00:17:39.688 ] 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "subsystem": "nbd", 00:17:39.688 "config": [] 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "subsystem": "scheduler", 00:17:39.688 "config": [ 00:17:39.688 { 00:17:39.688 "method": "framework_set_scheduler", 00:17:39.688 "params": { 00:17:39.688 "name": "static" 00:17:39.688 } 00:17:39.688 } 00:17:39.688 ] 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "subsystem": "nvmf", 00:17:39.688 "config": [ 00:17:39.688 { 00:17:39.688 "method": "nvmf_set_config", 00:17:39.688 "params": { 00:17:39.688 "discovery_filter": "match_any", 00:17:39.688 "admin_cmd_passthru": { 00:17:39.688 "identify_ctrlr": false 00:17:39.688 } 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "nvmf_set_max_subsystems", 00:17:39.688 "params": { 00:17:39.688 "max_subsystems": 1024 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "nvmf_set_crdt", 00:17:39.688 "params": { 00:17:39.688 "crdt1": 0, 00:17:39.688 "crdt2": 0, 00:17:39.688 "crdt3": 0 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "nvmf_create_transport", 00:17:39.688 "params": { 00:17:39.688 "trtype": "TCP", 00:17:39.688 "max_queue_depth": 128, 00:17:39.688 "max_io_qpairs_per_ctrlr": 127, 00:17:39.688 "in_capsule_data_size": 4096, 00:17:39.688 "max_io_size": 131072, 00:17:39.688 "io_unit_size": 131072, 00:17:39.688 "max_aq_depth": 128, 00:17:39.688 "num_shared_buffers": 511, 00:17:39.688 "buf_cache_size": 4294967295, 00:17:39.688 "dif_insert_or_strip": false, 00:17:39.688 "zcopy": false, 00:17:39.688 "c2h_success": false, 00:17:39.688 "sock_priority": 0, 00:17:39.688 "abort_timeout_sec": 1, 00:17:39.688 "ack_timeout": 0, 00:17:39.688 "data_wr_pool_size": 0 00:17:39.688 } 00:17:39.688 }, 00:17:39.688 { 00:17:39.688 "method": "nvmf_create_subsystem", 00:17:39.688 "params": { 00:17:39.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.688 "allow_any_host": false, 00:17:39.688 "serial_number": "SPDK00000000000001", 
00:17:39.688 "model_number": "SPDK bdev Controller", 00:17:39.688 "max_namespaces": 10, 00:17:39.688 "min_cntlid": 1, 00:17:39.688 "max_cntlid": 65519, 00:17:39.688 "ana_reporting": false 00:17:39.688 } 00:17:39.689 }, 00:17:39.689 { 00:17:39.689 "method": "nvmf_subsystem_add_host", 00:17:39.689 "params": { 00:17:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.689 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.689 "psk": "/tmp/tmp.dZ16jTHSF8" 00:17:39.689 } 00:17:39.689 }, 00:17:39.689 { 00:17:39.689 "method": "nvmf_subsystem_add_ns", 00:17:39.689 "params": { 00:17:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.689 "namespace": { 00:17:39.689 "nsid": 1, 00:17:39.689 "bdev_name": "malloc0", 00:17:39.689 "nguid": "950D5D3A28C04B1C92E36177C1D634C5", 00:17:39.689 "uuid": "950d5d3a-28c0-4b1c-92e3-6177c1d634c5", 00:17:39.689 "no_auto_visible": false 00:17:39.689 } 00:17:39.689 } 00:17:39.689 }, 00:17:39.689 { 00:17:39.689 "method": "nvmf_subsystem_add_listener", 00:17:39.689 "params": { 00:17:39.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.689 "listen_address": { 00:17:39.689 "trtype": "TCP", 00:17:39.689 "adrfam": "IPv4", 00:17:39.689 "traddr": "10.0.0.2", 00:17:39.689 "trsvcid": "4420" 00:17:39.689 }, 00:17:39.689 "secure_channel": true 00:17:39.689 } 00:17:39.689 } 00:17:39.689 ] 00:17:39.689 } 00:17:39.689 ] 00:17:39.689 }' 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1613672 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1613672 
00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1613672 ']' 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.689 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.689 [2024-07-24 20:45:35.167641] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:39.689 [2024-07-24 20:45:35.167730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.689 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.689 [2024-07-24 20:45:35.234996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.947 [2024-07-24 20:45:35.352500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.947 [2024-07-24 20:45:35.352557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.947 [2024-07-24 20:45:35.352574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.947 [2024-07-24 20:45:35.352587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:39.947 [2024-07-24 20:45:35.352599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.947 [2024-07-24 20:45:35.352677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.206 [2024-07-24 20:45:35.596711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.206 [2024-07-24 20:45:35.620916] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.206 [2024-07-24 20:45:35.636979] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.206 [2024-07-24 20:45:35.637267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1613820 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1613820 /var/tmp/bdevperf.sock 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1613820 ']' 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.772 20:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.772 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:40.772 "subsystems": [ 00:17:40.773 { 00:17:40.773 "subsystem": "keyring", 00:17:40.773 "config": [] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "iobuf", 00:17:40.773 "config": [ 00:17:40.773 { 00:17:40.773 "method": "iobuf_set_options", 00:17:40.773 "params": { 00:17:40.773 "small_pool_count": 8192, 00:17:40.773 "large_pool_count": 1024, 00:17:40.773 "small_bufsize": 8192, 00:17:40.773 "large_bufsize": 135168 00:17:40.773 } 00:17:40.773 } 00:17:40.773 ] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "sock", 00:17:40.773 "config": [ 00:17:40.773 { 00:17:40.773 "method": "sock_set_default_impl", 00:17:40.773 "params": { 00:17:40.773 "impl_name": "posix" 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "sock_impl_set_options", 00:17:40.773 "params": { 00:17:40.773 "impl_name": "ssl", 00:17:40.773 "recv_buf_size": 4096, 00:17:40.773 "send_buf_size": 4096, 00:17:40.773 "enable_recv_pipe": true, 00:17:40.773 "enable_quickack": false, 00:17:40.773 "enable_placement_id": 0, 00:17:40.773 "enable_zerocopy_send_server": true, 00:17:40.773 "enable_zerocopy_send_client": false, 00:17:40.773 "zerocopy_threshold": 0, 00:17:40.773 "tls_version": 0, 00:17:40.773 "enable_ktls": false 00:17:40.773 } 00:17:40.773 }, 
00:17:40.773 { 00:17:40.773 "method": "sock_impl_set_options", 00:17:40.773 "params": { 00:17:40.773 "impl_name": "posix", 00:17:40.773 "recv_buf_size": 2097152, 00:17:40.773 "send_buf_size": 2097152, 00:17:40.773 "enable_recv_pipe": true, 00:17:40.773 "enable_quickack": false, 00:17:40.773 "enable_placement_id": 0, 00:17:40.773 "enable_zerocopy_send_server": true, 00:17:40.773 "enable_zerocopy_send_client": false, 00:17:40.773 "zerocopy_threshold": 0, 00:17:40.773 "tls_version": 0, 00:17:40.773 "enable_ktls": false 00:17:40.773 } 00:17:40.773 } 00:17:40.773 ] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "vmd", 00:17:40.773 "config": [] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "accel", 00:17:40.773 "config": [ 00:17:40.773 { 00:17:40.773 "method": "accel_set_options", 00:17:40.773 "params": { 00:17:40.773 "small_cache_size": 128, 00:17:40.773 "large_cache_size": 16, 00:17:40.773 "task_count": 2048, 00:17:40.773 "sequence_count": 2048, 00:17:40.773 "buf_count": 2048 00:17:40.773 } 00:17:40.773 } 00:17:40.773 ] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "bdev", 00:17:40.773 "config": [ 00:17:40.773 { 00:17:40.773 "method": "bdev_set_options", 00:17:40.773 "params": { 00:17:40.773 "bdev_io_pool_size": 65535, 00:17:40.773 "bdev_io_cache_size": 256, 00:17:40.773 "bdev_auto_examine": true, 00:17:40.773 "iobuf_small_cache_size": 128, 00:17:40.773 "iobuf_large_cache_size": 16 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_raid_set_options", 00:17:40.773 "params": { 00:17:40.773 "process_window_size_kb": 1024, 00:17:40.773 "process_max_bandwidth_mb_sec": 0 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_iscsi_set_options", 00:17:40.773 "params": { 00:17:40.773 "timeout_sec": 30 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_nvme_set_options", 00:17:40.773 "params": { 00:17:40.773 "action_on_timeout": "none", 00:17:40.773 "timeout_us": 0, 00:17:40.773 
"timeout_admin_us": 0, 00:17:40.773 "keep_alive_timeout_ms": 10000, 00:17:40.773 "arbitration_burst": 0, 00:17:40.773 "low_priority_weight": 0, 00:17:40.773 "medium_priority_weight": 0, 00:17:40.773 "high_priority_weight": 0, 00:17:40.773 "nvme_adminq_poll_period_us": 10000, 00:17:40.773 "nvme_ioq_poll_period_us": 0, 00:17:40.773 "io_queue_requests": 512, 00:17:40.773 "delay_cmd_submit": true, 00:17:40.773 "transport_retry_count": 4, 00:17:40.773 "bdev_retry_count": 3, 00:17:40.773 "transport_ack_timeout": 0, 00:17:40.773 "ctrlr_loss_timeout_sec": 0, 00:17:40.773 "reconnect_delay_sec": 0, 00:17:40.773 "fast_io_fail_timeout_sec": 0, 00:17:40.773 "disable_auto_failback": false, 00:17:40.773 "generate_uuids": false, 00:17:40.773 "transport_tos": 0, 00:17:40.773 "nvme_error_stat": false, 00:17:40.773 "rdma_srq_size": 0, 00:17:40.773 "io_path_stat": false, 00:17:40.773 "allow_accel_sequence": false, 00:17:40.773 "rdma_max_cq_size": 0, 00:17:40.773 "rdma_cm_event_timeout_ms": 0, 00:17:40.773 "dhchap_digests": [ 00:17:40.773 "sha256", 00:17:40.773 "sha384", 00:17:40.773 "sha512" 00:17:40.773 ], 00:17:40.773 "dhchap_dhgroups": [ 00:17:40.773 "null", 00:17:40.773 "ffdhe2048", 00:17:40.773 "ffdhe3072", 00:17:40.773 "ffdhe4096", 00:17:40.773 "ffdhe6144", 00:17:40.773 "ffdhe8192" 00:17:40.773 ] 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_nvme_attach_controller", 00:17:40.773 "params": { 00:17:40.773 "name": "TLSTEST", 00:17:40.773 "trtype": "TCP", 00:17:40.773 "adrfam": "IPv4", 00:17:40.773 "traddr": "10.0.0.2", 00:17:40.773 "trsvcid": "4420", 00:17:40.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.773 "prchk_reftag": false, 00:17:40.773 "prchk_guard": false, 00:17:40.773 "ctrlr_loss_timeout_sec": 0, 00:17:40.773 "reconnect_delay_sec": 0, 00:17:40.773 "fast_io_fail_timeout_sec": 0, 00:17:40.773 "psk": "/tmp/tmp.dZ16jTHSF8", 00:17:40.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.773 "hdgst": false, 00:17:40.773 "ddgst": false 
00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_nvme_set_hotplug", 00:17:40.773 "params": { 00:17:40.773 "period_us": 100000, 00:17:40.773 "enable": false 00:17:40.773 } 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "method": "bdev_wait_for_examine" 00:17:40.773 } 00:17:40.773 ] 00:17:40.773 }, 00:17:40.773 { 00:17:40.773 "subsystem": "nbd", 00:17:40.773 "config": [] 00:17:40.773 } 00:17:40.773 ] 00:17:40.773 }' 00:17:40.773 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.773 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.773 [2024-07-24 20:45:36.231867] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:40.773 [2024-07-24 20:45:36.231946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613820 ] 00:17:40.773 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.773 [2024-07-24 20:45:36.288322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.032 [2024-07-24 20:45:36.396240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.032 [2024-07-24 20:45:36.567107] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.032 [2024-07-24 20:45:36.567276] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.965 20:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.965 20:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:41.965 20:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.965 Running I/O for 10 seconds... 00:17:51.926 00:17:51.926 Latency(us) 00:17:51.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.926 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:51.926 Verification LBA range: start 0x0 length 0x2000 00:17:51.926 TLSTESTn1 : 10.04 2559.41 10.00 0.00 0.00 49898.76 6747.78 65633.09 00:17:51.926 =================================================================================================================== 00:17:51.926 Total : 2559.41 10.00 0.00 0.00 49898.76 6747.78 65633.09 00:17:51.926 0 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1613820 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1613820 ']' 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1613820 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613820 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613820' 00:17:51.926 killing process with pid 1613820 00:17:51.926 20:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1613820 00:17:51.926 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.926 00:17:51.926 Latency(us) 00:17:51.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.926 =================================================================================================================== 00:17:51.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.926 [2024-07-24 20:45:47.407650] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:51.926 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1613820 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1613672 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1613672 ']' 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1613672 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1613672 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1613672' 00:17:52.183 killing process with pid 1613672 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1613672 00:17:52.183 
[2024-07-24 20:45:47.703543] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:52.183 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1613672 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1615169 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1615169 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1615169 ']' 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.441 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.698 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:52.698 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.698 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.698 [2024-07-24 20:45:48.055757] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:52.698 [2024-07-24 20:45:48.055844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.698 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.698 [2024-07-24 20:45:48.123319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.699 [2024-07-24 20:45:48.235823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.699 [2024-07-24 20:45:48.235886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.699 [2024-07-24 20:45:48.235902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.699 [2024-07-24 20:45:48.235916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.699 [2024-07-24 20:45:48.235927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.699 [2024-07-24 20:45:48.235958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.632 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.632 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:53.632 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.632 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:53.632 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.632 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.632 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.dZ16jTHSF8 00:17:53.632 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dZ16jTHSF8 00:17:53.632 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.928 [2024-07-24 20:45:49.238205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.928 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.185 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.185 [2024-07-24 20:45:49.731541] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.185 [2024-07-24 20:45:49.731792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:54.443 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.443 malloc0 00:17:54.443 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.700 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dZ16jTHSF8 00:17:54.957 [2024-07-24 20:45:50.489693] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.957 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1615461 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1615461 /var/tmp/bdevperf.sock 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1615461 ']' 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:17:54.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.958 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.215 [2024-07-24 20:45:50.554541] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:17:55.215 [2024-07-24 20:45:50.554625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615461 ] 00:17:55.215 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.215 [2024-07-24 20:45:50.619749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.215 [2024-07-24 20:45:50.739801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.472 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.472 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:55.472 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dZ16jTHSF8 00:17:55.730 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:55.987 [2024-07-24 20:45:51.341190] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.987 nvme0n1 00:17:55.987 20:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.244 Running I/O for 1 seconds... 00:17:57.175 00:17:57.175 Latency(us) 00:17:57.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.175 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:57.175 Verification LBA range: start 0x0 length 0x2000 00:17:57.175 nvme0n1 : 1.03 2990.60 11.68 0.00 0.00 42258.49 8738.13 40001.23 00:17:57.175 =================================================================================================================== 00:17:57.175 Total : 2990.60 11.68 0.00 0.00 42258.49 8738.13 40001.23 00:17:57.175 0 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1615461 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1615461 ']' 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1615461 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615461 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615461' 00:17:57.175 killing process with pid 1615461 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
1615461 00:17:57.175 Received shutdown signal, test time was about 1.000000 seconds 00:17:57.175 00:17:57.175 Latency(us) 00:17:57.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.175 =================================================================================================================== 00:17:57.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.175 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1615461 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1615169 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1615169 ']' 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1615169 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615169 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615169' 00:17:57.433 killing process with pid 1615169 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1615169 00:17:57.433 [2024-07-24 20:45:52.912990] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:57.433 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1615169 
00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1615860 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1615860 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1615860 ']' 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.690 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.965 [2024-07-24 20:45:53.258958] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:17:57.965 [2024-07-24 20:45:53.259065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.965 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.965 [2024-07-24 20:45:53.323803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.965 [2024-07-24 20:45:53.433460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.965 [2024-07-24 20:45:53.433538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.965 [2024-07-24 20:45:53.433552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.965 [2024-07-24 20:45:53.433562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.965 [2024-07-24 20:45:53.433572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.965 [2024-07-24 20:45:53.433612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.224 [2024-07-24 20:45:53.582753] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.224 malloc0 00:17:58.224 [2024-07-24 20:45:53.615671] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.224 [2024-07-24 20:45:53.626474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.224 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1615890 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 1615890 /var/tmp/bdevperf.sock 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1615890 ']' 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.225 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.225 [2024-07-24 20:45:53.693512] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:17:58.225 [2024-07-24 20:45:53.693605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615890 ] 00:17:58.225 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.225 [2024-07-24 20:45:53.754306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.482 [2024-07-24 20:45:53.872916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.482 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.482 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.482 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dZ16jTHSF8 00:17:58.739 20:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:58.997 [2024-07-24 20:45:54.531855] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.254 nvme0n1 00:17:59.254 20:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.254 Running I/O for 1 seconds... 
00:18:00.187 00:18:00.187 Latency(us) 00:18:00.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.187 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.187 Verification LBA range: start 0x0 length 0x2000 00:18:00.187 nvme0n1 : 1.03 3510.77 13.71 0.00 0.00 36015.81 6505.05 45438.29 00:18:00.187 =================================================================================================================== 00:18:00.187 Total : 3510.77 13.71 0.00 0.00 36015.81 6505.05 45438.29 00:18:00.187 0 00:18:00.445 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:00.445 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.445 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.445 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.445 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:00.445 "subsystems": [ 00:18:00.445 { 00:18:00.445 "subsystem": "keyring", 00:18:00.445 "config": [ 00:18:00.445 { 00:18:00.445 "method": "keyring_file_add_key", 00:18:00.445 "params": { 00:18:00.445 "name": "key0", 00:18:00.445 "path": "/tmp/tmp.dZ16jTHSF8" 00:18:00.445 } 00:18:00.445 } 00:18:00.445 ] 00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "subsystem": "iobuf", 00:18:00.445 "config": [ 00:18:00.445 { 00:18:00.445 "method": "iobuf_set_options", 00:18:00.445 "params": { 00:18:00.445 "small_pool_count": 8192, 00:18:00.445 "large_pool_count": 1024, 00:18:00.445 "small_bufsize": 8192, 00:18:00.445 "large_bufsize": 135168 00:18:00.445 } 00:18:00.445 } 00:18:00.445 ] 00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "subsystem": "sock", 00:18:00.445 "config": [ 00:18:00.445 { 00:18:00.445 "method": "sock_set_default_impl", 00:18:00.445 "params": { 00:18:00.445 "impl_name": "posix" 00:18:00.445 } 
00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "method": "sock_impl_set_options", 00:18:00.445 "params": { 00:18:00.445 "impl_name": "ssl", 00:18:00.445 "recv_buf_size": 4096, 00:18:00.445 "send_buf_size": 4096, 00:18:00.445 "enable_recv_pipe": true, 00:18:00.445 "enable_quickack": false, 00:18:00.445 "enable_placement_id": 0, 00:18:00.445 "enable_zerocopy_send_server": true, 00:18:00.445 "enable_zerocopy_send_client": false, 00:18:00.445 "zerocopy_threshold": 0, 00:18:00.445 "tls_version": 0, 00:18:00.445 "enable_ktls": false 00:18:00.445 } 00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "method": "sock_impl_set_options", 00:18:00.445 "params": { 00:18:00.445 "impl_name": "posix", 00:18:00.445 "recv_buf_size": 2097152, 00:18:00.445 "send_buf_size": 2097152, 00:18:00.445 "enable_recv_pipe": true, 00:18:00.445 "enable_quickack": false, 00:18:00.445 "enable_placement_id": 0, 00:18:00.445 "enable_zerocopy_send_server": true, 00:18:00.445 "enable_zerocopy_send_client": false, 00:18:00.445 "zerocopy_threshold": 0, 00:18:00.445 "tls_version": 0, 00:18:00.445 "enable_ktls": false 00:18:00.445 } 00:18:00.445 } 00:18:00.445 ] 00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "subsystem": "vmd", 00:18:00.445 "config": [] 00:18:00.445 }, 00:18:00.445 { 00:18:00.445 "subsystem": "accel", 00:18:00.445 "config": [ 00:18:00.445 { 00:18:00.445 "method": "accel_set_options", 00:18:00.445 "params": { 00:18:00.445 "small_cache_size": 128, 00:18:00.446 "large_cache_size": 16, 00:18:00.446 "task_count": 2048, 00:18:00.446 "sequence_count": 2048, 00:18:00.446 "buf_count": 2048 00:18:00.446 } 00:18:00.446 } 00:18:00.446 ] 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "subsystem": "bdev", 00:18:00.446 "config": [ 00:18:00.446 { 00:18:00.446 "method": "bdev_set_options", 00:18:00.446 "params": { 00:18:00.446 "bdev_io_pool_size": 65535, 00:18:00.446 "bdev_io_cache_size": 256, 00:18:00.446 "bdev_auto_examine": true, 00:18:00.446 "iobuf_small_cache_size": 128, 00:18:00.446 "iobuf_large_cache_size": 16 
00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_raid_set_options", 00:18:00.446 "params": { 00:18:00.446 "process_window_size_kb": 1024, 00:18:00.446 "process_max_bandwidth_mb_sec": 0 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_iscsi_set_options", 00:18:00.446 "params": { 00:18:00.446 "timeout_sec": 30 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_nvme_set_options", 00:18:00.446 "params": { 00:18:00.446 "action_on_timeout": "none", 00:18:00.446 "timeout_us": 0, 00:18:00.446 "timeout_admin_us": 0, 00:18:00.446 "keep_alive_timeout_ms": 10000, 00:18:00.446 "arbitration_burst": 0, 00:18:00.446 "low_priority_weight": 0, 00:18:00.446 "medium_priority_weight": 0, 00:18:00.446 "high_priority_weight": 0, 00:18:00.446 "nvme_adminq_poll_period_us": 10000, 00:18:00.446 "nvme_ioq_poll_period_us": 0, 00:18:00.446 "io_queue_requests": 0, 00:18:00.446 "delay_cmd_submit": true, 00:18:00.446 "transport_retry_count": 4, 00:18:00.446 "bdev_retry_count": 3, 00:18:00.446 "transport_ack_timeout": 0, 00:18:00.446 "ctrlr_loss_timeout_sec": 0, 00:18:00.446 "reconnect_delay_sec": 0, 00:18:00.446 "fast_io_fail_timeout_sec": 0, 00:18:00.446 "disable_auto_failback": false, 00:18:00.446 "generate_uuids": false, 00:18:00.446 "transport_tos": 0, 00:18:00.446 "nvme_error_stat": false, 00:18:00.446 "rdma_srq_size": 0, 00:18:00.446 "io_path_stat": false, 00:18:00.446 "allow_accel_sequence": false, 00:18:00.446 "rdma_max_cq_size": 0, 00:18:00.446 "rdma_cm_event_timeout_ms": 0, 00:18:00.446 "dhchap_digests": [ 00:18:00.446 "sha256", 00:18:00.446 "sha384", 00:18:00.446 "sha512" 00:18:00.446 ], 00:18:00.446 "dhchap_dhgroups": [ 00:18:00.446 "null", 00:18:00.446 "ffdhe2048", 00:18:00.446 "ffdhe3072", 00:18:00.446 "ffdhe4096", 00:18:00.446 "ffdhe6144", 00:18:00.446 "ffdhe8192" 00:18:00.446 ] 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_nvme_set_hotplug", 00:18:00.446 "params": { 00:18:00.446 
"period_us": 100000, 00:18:00.446 "enable": false 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_malloc_create", 00:18:00.446 "params": { 00:18:00.446 "name": "malloc0", 00:18:00.446 "num_blocks": 8192, 00:18:00.446 "block_size": 4096, 00:18:00.446 "physical_block_size": 4096, 00:18:00.446 "uuid": "5c156dea-2e98-46a7-bacd-063f0bf5086b", 00:18:00.446 "optimal_io_boundary": 0, 00:18:00.446 "md_size": 0, 00:18:00.446 "dif_type": 0, 00:18:00.446 "dif_is_head_of_md": false, 00:18:00.446 "dif_pi_format": 0 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "bdev_wait_for_examine" 00:18:00.446 } 00:18:00.446 ] 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "subsystem": "nbd", 00:18:00.446 "config": [] 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "subsystem": "scheduler", 00:18:00.446 "config": [ 00:18:00.446 { 00:18:00.446 "method": "framework_set_scheduler", 00:18:00.446 "params": { 00:18:00.446 "name": "static" 00:18:00.446 } 00:18:00.446 } 00:18:00.446 ] 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "subsystem": "nvmf", 00:18:00.446 "config": [ 00:18:00.446 { 00:18:00.446 "method": "nvmf_set_config", 00:18:00.446 "params": { 00:18:00.446 "discovery_filter": "match_any", 00:18:00.446 "admin_cmd_passthru": { 00:18:00.446 "identify_ctrlr": false 00:18:00.446 } 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_set_max_subsystems", 00:18:00.446 "params": { 00:18:00.446 "max_subsystems": 1024 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_set_crdt", 00:18:00.446 "params": { 00:18:00.446 "crdt1": 0, 00:18:00.446 "crdt2": 0, 00:18:00.446 "crdt3": 0 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_create_transport", 00:18:00.446 "params": { 00:18:00.446 "trtype": "TCP", 00:18:00.446 "max_queue_depth": 128, 00:18:00.446 "max_io_qpairs_per_ctrlr": 127, 00:18:00.446 "in_capsule_data_size": 4096, 00:18:00.446 "max_io_size": 131072, 00:18:00.446 "io_unit_size": 
131072, 00:18:00.446 "max_aq_depth": 128, 00:18:00.446 "num_shared_buffers": 511, 00:18:00.446 "buf_cache_size": 4294967295, 00:18:00.446 "dif_insert_or_strip": false, 00:18:00.446 "zcopy": false, 00:18:00.446 "c2h_success": false, 00:18:00.446 "sock_priority": 0, 00:18:00.446 "abort_timeout_sec": 1, 00:18:00.446 "ack_timeout": 0, 00:18:00.446 "data_wr_pool_size": 0 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_create_subsystem", 00:18:00.446 "params": { 00:18:00.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.446 "allow_any_host": false, 00:18:00.446 "serial_number": "00000000000000000000", 00:18:00.446 "model_number": "SPDK bdev Controller", 00:18:00.446 "max_namespaces": 32, 00:18:00.446 "min_cntlid": 1, 00:18:00.446 "max_cntlid": 65519, 00:18:00.446 "ana_reporting": false 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_subsystem_add_host", 00:18:00.446 "params": { 00:18:00.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.446 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.446 "psk": "key0" 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_subsystem_add_ns", 00:18:00.446 "params": { 00:18:00.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.446 "namespace": { 00:18:00.446 "nsid": 1, 00:18:00.446 "bdev_name": "malloc0", 00:18:00.446 "nguid": "5C156DEA2E9846A7BACD063F0BF5086B", 00:18:00.446 "uuid": "5c156dea-2e98-46a7-bacd-063f0bf5086b", 00:18:00.446 "no_auto_visible": false 00:18:00.446 } 00:18:00.446 } 00:18:00.446 }, 00:18:00.446 { 00:18:00.446 "method": "nvmf_subsystem_add_listener", 00:18:00.446 "params": { 00:18:00.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.446 "listen_address": { 00:18:00.446 "trtype": "TCP", 00:18:00.446 "adrfam": "IPv4", 00:18:00.446 "traddr": "10.0.0.2", 00:18:00.446 "trsvcid": "4420" 00:18:00.446 }, 00:18:00.446 "secure_channel": false, 00:18:00.446 "sock_impl": "ssl" 00:18:00.446 } 00:18:00.446 } 00:18:00.446 ] 00:18:00.446 } 00:18:00.446 ] 
00:18:00.446 }' 00:18:00.446 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:00.705 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:00.705 "subsystems": [ 00:18:00.705 { 00:18:00.705 "subsystem": "keyring", 00:18:00.705 "config": [ 00:18:00.705 { 00:18:00.705 "method": "keyring_file_add_key", 00:18:00.705 "params": { 00:18:00.705 "name": "key0", 00:18:00.705 "path": "/tmp/tmp.dZ16jTHSF8" 00:18:00.705 } 00:18:00.705 } 00:18:00.705 ] 00:18:00.705 }, 00:18:00.705 { 00:18:00.705 "subsystem": "iobuf", 00:18:00.705 "config": [ 00:18:00.705 { 00:18:00.705 "method": "iobuf_set_options", 00:18:00.705 "params": { 00:18:00.705 "small_pool_count": 8192, 00:18:00.705 "large_pool_count": 1024, 00:18:00.705 "small_bufsize": 8192, 00:18:00.705 "large_bufsize": 135168 00:18:00.705 } 00:18:00.705 } 00:18:00.705 ] 00:18:00.705 }, 00:18:00.705 { 00:18:00.705 "subsystem": "sock", 00:18:00.705 "config": [ 00:18:00.705 { 00:18:00.705 "method": "sock_set_default_impl", 00:18:00.705 "params": { 00:18:00.705 "impl_name": "posix" 00:18:00.705 } 00:18:00.705 }, 00:18:00.705 { 00:18:00.705 "method": "sock_impl_set_options", 00:18:00.705 "params": { 00:18:00.705 "impl_name": "ssl", 00:18:00.705 "recv_buf_size": 4096, 00:18:00.705 "send_buf_size": 4096, 00:18:00.705 "enable_recv_pipe": true, 00:18:00.705 "enable_quickack": false, 00:18:00.705 "enable_placement_id": 0, 00:18:00.705 "enable_zerocopy_send_server": true, 00:18:00.705 "enable_zerocopy_send_client": false, 00:18:00.705 "zerocopy_threshold": 0, 00:18:00.705 "tls_version": 0, 00:18:00.705 "enable_ktls": false 00:18:00.705 } 00:18:00.705 }, 00:18:00.705 { 00:18:00.705 "method": "sock_impl_set_options", 00:18:00.705 "params": { 00:18:00.705 "impl_name": "posix", 00:18:00.706 "recv_buf_size": 2097152, 00:18:00.706 "send_buf_size": 2097152, 00:18:00.706 
"enable_recv_pipe": true, 00:18:00.706 "enable_quickack": false, 00:18:00.706 "enable_placement_id": 0, 00:18:00.706 "enable_zerocopy_send_server": true, 00:18:00.706 "enable_zerocopy_send_client": false, 00:18:00.706 "zerocopy_threshold": 0, 00:18:00.706 "tls_version": 0, 00:18:00.706 "enable_ktls": false 00:18:00.706 } 00:18:00.706 } 00:18:00.706 ] 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "subsystem": "vmd", 00:18:00.706 "config": [] 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "subsystem": "accel", 00:18:00.706 "config": [ 00:18:00.706 { 00:18:00.706 "method": "accel_set_options", 00:18:00.706 "params": { 00:18:00.706 "small_cache_size": 128, 00:18:00.706 "large_cache_size": 16, 00:18:00.706 "task_count": 2048, 00:18:00.706 "sequence_count": 2048, 00:18:00.706 "buf_count": 2048 00:18:00.706 } 00:18:00.706 } 00:18:00.706 ] 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "subsystem": "bdev", 00:18:00.706 "config": [ 00:18:00.706 { 00:18:00.706 "method": "bdev_set_options", 00:18:00.706 "params": { 00:18:00.706 "bdev_io_pool_size": 65535, 00:18:00.706 "bdev_io_cache_size": 256, 00:18:00.706 "bdev_auto_examine": true, 00:18:00.706 "iobuf_small_cache_size": 128, 00:18:00.706 "iobuf_large_cache_size": 16 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_raid_set_options", 00:18:00.706 "params": { 00:18:00.706 "process_window_size_kb": 1024, 00:18:00.706 "process_max_bandwidth_mb_sec": 0 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_iscsi_set_options", 00:18:00.706 "params": { 00:18:00.706 "timeout_sec": 30 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_nvme_set_options", 00:18:00.706 "params": { 00:18:00.706 "action_on_timeout": "none", 00:18:00.706 "timeout_us": 0, 00:18:00.706 "timeout_admin_us": 0, 00:18:00.706 "keep_alive_timeout_ms": 10000, 00:18:00.706 "arbitration_burst": 0, 00:18:00.706 "low_priority_weight": 0, 00:18:00.706 "medium_priority_weight": 0, 00:18:00.706 
"high_priority_weight": 0, 00:18:00.706 "nvme_adminq_poll_period_us": 10000, 00:18:00.706 "nvme_ioq_poll_period_us": 0, 00:18:00.706 "io_queue_requests": 512, 00:18:00.706 "delay_cmd_submit": true, 00:18:00.706 "transport_retry_count": 4, 00:18:00.706 "bdev_retry_count": 3, 00:18:00.706 "transport_ack_timeout": 0, 00:18:00.706 "ctrlr_loss_timeout_sec": 0, 00:18:00.706 "reconnect_delay_sec": 0, 00:18:00.706 "fast_io_fail_timeout_sec": 0, 00:18:00.706 "disable_auto_failback": false, 00:18:00.706 "generate_uuids": false, 00:18:00.706 "transport_tos": 0, 00:18:00.706 "nvme_error_stat": false, 00:18:00.706 "rdma_srq_size": 0, 00:18:00.706 "io_path_stat": false, 00:18:00.706 "allow_accel_sequence": false, 00:18:00.706 "rdma_max_cq_size": 0, 00:18:00.706 "rdma_cm_event_timeout_ms": 0, 00:18:00.706 "dhchap_digests": [ 00:18:00.706 "sha256", 00:18:00.706 "sha384", 00:18:00.706 "sha512" 00:18:00.706 ], 00:18:00.706 "dhchap_dhgroups": [ 00:18:00.706 "null", 00:18:00.706 "ffdhe2048", 00:18:00.706 "ffdhe3072", 00:18:00.706 "ffdhe4096", 00:18:00.706 "ffdhe6144", 00:18:00.706 "ffdhe8192" 00:18:00.706 ] 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_nvme_attach_controller", 00:18:00.706 "params": { 00:18:00.706 "name": "nvme0", 00:18:00.706 "trtype": "TCP", 00:18:00.706 "adrfam": "IPv4", 00:18:00.706 "traddr": "10.0.0.2", 00:18:00.706 "trsvcid": "4420", 00:18:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.706 "prchk_reftag": false, 00:18:00.706 "prchk_guard": false, 00:18:00.706 "ctrlr_loss_timeout_sec": 0, 00:18:00.706 "reconnect_delay_sec": 0, 00:18:00.706 "fast_io_fail_timeout_sec": 0, 00:18:00.706 "psk": "key0", 00:18:00.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.706 "hdgst": false, 00:18:00.706 "ddgst": false 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_nvme_set_hotplug", 00:18:00.706 "params": { 00:18:00.706 "period_us": 100000, 00:18:00.706 "enable": false 00:18:00.706 } 00:18:00.706 }, 
00:18:00.706 { 00:18:00.706 "method": "bdev_enable_histogram", 00:18:00.706 "params": { 00:18:00.706 "name": "nvme0n1", 00:18:00.706 "enable": true 00:18:00.706 } 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "method": "bdev_wait_for_examine" 00:18:00.706 } 00:18:00.706 ] 00:18:00.706 }, 00:18:00.706 { 00:18:00.706 "subsystem": "nbd", 00:18:00.706 "config": [] 00:18:00.706 } 00:18:00.706 ] 00:18:00.706 }' 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1615890 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1615890 ']' 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1615890 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615890 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615890' 00:18:00.706 killing process with pid 1615890 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1615890 00:18:00.706 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.706 00:18:00.706 Latency(us) 00:18:00.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.706 =================================================================================================================== 00:18:00.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:18:00.706 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1615890 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1615860 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1615860 ']' 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1615860 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615860 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615860' 00:18:00.964 killing process with pid 1615860 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1615860 00:18:00.964 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1615860 00:18:01.223 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:01.223 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.223 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:01.223 "subsystems": [ 00:18:01.223 { 00:18:01.223 "subsystem": "keyring", 00:18:01.223 "config": [ 00:18:01.223 { 00:18:01.223 "method": "keyring_file_add_key", 00:18:01.223 "params": { 00:18:01.223 "name": "key0", 00:18:01.223 "path": 
"/tmp/tmp.dZ16jTHSF8" 00:18:01.223 } 00:18:01.223 } 00:18:01.223 ] 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "subsystem": "iobuf", 00:18:01.223 "config": [ 00:18:01.223 { 00:18:01.223 "method": "iobuf_set_options", 00:18:01.223 "params": { 00:18:01.223 "small_pool_count": 8192, 00:18:01.223 "large_pool_count": 1024, 00:18:01.223 "small_bufsize": 8192, 00:18:01.223 "large_bufsize": 135168 00:18:01.223 } 00:18:01.223 } 00:18:01.223 ] 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "subsystem": "sock", 00:18:01.223 "config": [ 00:18:01.223 { 00:18:01.223 "method": "sock_set_default_impl", 00:18:01.223 "params": { 00:18:01.223 "impl_name": "posix" 00:18:01.223 } 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "method": "sock_impl_set_options", 00:18:01.223 "params": { 00:18:01.223 "impl_name": "ssl", 00:18:01.223 "recv_buf_size": 4096, 00:18:01.223 "send_buf_size": 4096, 00:18:01.223 "enable_recv_pipe": true, 00:18:01.223 "enable_quickack": false, 00:18:01.223 "enable_placement_id": 0, 00:18:01.223 "enable_zerocopy_send_server": true, 00:18:01.223 "enable_zerocopy_send_client": false, 00:18:01.223 "zerocopy_threshold": 0, 00:18:01.223 "tls_version": 0, 00:18:01.223 "enable_ktls": false 00:18:01.223 } 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "method": "sock_impl_set_options", 00:18:01.223 "params": { 00:18:01.223 "impl_name": "posix", 00:18:01.223 "recv_buf_size": 2097152, 00:18:01.223 "send_buf_size": 2097152, 00:18:01.223 "enable_recv_pipe": true, 00:18:01.223 "enable_quickack": false, 00:18:01.223 "enable_placement_id": 0, 00:18:01.223 "enable_zerocopy_send_server": true, 00:18:01.223 "enable_zerocopy_send_client": false, 00:18:01.223 "zerocopy_threshold": 0, 00:18:01.223 "tls_version": 0, 00:18:01.223 "enable_ktls": false 00:18:01.223 } 00:18:01.223 } 00:18:01.223 ] 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "subsystem": "vmd", 00:18:01.223 "config": [] 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "subsystem": "accel", 00:18:01.223 "config": [ 00:18:01.223 { 
00:18:01.223 "method": "accel_set_options", 00:18:01.223 "params": { 00:18:01.223 "small_cache_size": 128, 00:18:01.223 "large_cache_size": 16, 00:18:01.223 "task_count": 2048, 00:18:01.223 "sequence_count": 2048, 00:18:01.223 "buf_count": 2048 00:18:01.223 } 00:18:01.223 } 00:18:01.223 ] 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "subsystem": "bdev", 00:18:01.223 "config": [ 00:18:01.223 { 00:18:01.223 "method": "bdev_set_options", 00:18:01.223 "params": { 00:18:01.223 "bdev_io_pool_size": 65535, 00:18:01.223 "bdev_io_cache_size": 256, 00:18:01.223 "bdev_auto_examine": true, 00:18:01.223 "iobuf_small_cache_size": 128, 00:18:01.223 "iobuf_large_cache_size": 16 00:18:01.223 } 00:18:01.223 }, 00:18:01.223 { 00:18:01.223 "method": "bdev_raid_set_options", 00:18:01.223 "params": { 00:18:01.223 "process_window_size_kb": 1024, 00:18:01.223 "process_max_bandwidth_mb_sec": 0 00:18:01.223 } 00:18:01.223 }, 00:18:01.223 { 00:18:01.224 "method": "bdev_iscsi_set_options", 00:18:01.224 "params": { 00:18:01.224 "timeout_sec": 30 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "bdev_nvme_set_options", 00:18:01.224 "params": { 00:18:01.224 "action_on_timeout": "none", 00:18:01.224 "timeout_us": 0, 00:18:01.224 "timeout_admin_us": 0, 00:18:01.224 "keep_alive_timeout_ms": 10000, 00:18:01.224 "arbitration_burst": 0, 00:18:01.224 "low_priority_weight": 0, 00:18:01.224 "medium_priority_weight": 0, 00:18:01.224 "high_priority_weight": 0, 00:18:01.224 "nvme_adminq_poll_period_us": 10000, 00:18:01.224 "nvme_ioq_poll_period_us": 0, 00:18:01.224 "io_queue_requests": 0, 00:18:01.224 "delay_cmd_submit": true, 00:18:01.224 "transport_retry_count": 4, 00:18:01.224 "bdev_retry_count": 3, 00:18:01.224 "transport_ack_timeout": 0, 00:18:01.224 "ctrlr_loss_timeout_sec": 0, 00:18:01.224 "reconnect_delay_sec": 0, 00:18:01.224 "fast_io_fail_timeout_sec": 0, 00:18:01.224 "disable_auto_failback": false, 00:18:01.224 "generate_uuids": false, 00:18:01.224 "transport_tos": 0, 
00:18:01.224 "nvme_error_stat": false, 00:18:01.224 "rdma_srq_size": 0, 00:18:01.224 "io_path_stat": false, 00:18:01.224 "allow_accel_sequence": false, 00:18:01.224 "rdma_max_cq_size": 0, 00:18:01.224 "rdma_cm_event_timeout_ms": 0, 00:18:01.224 "dhchap_digests": [ 00:18:01.224 "sha256", 00:18:01.224 "sha384", 00:18:01.224 "sha512" 00:18:01.224 ], 00:18:01.224 "dhchap_dhgroups": [ 00:18:01.224 "null", 00:18:01.224 "ffdhe2048", 00:18:01.224 "ffdhe3072", 00:18:01.224 "ffdhe4096", 00:18:01.224 "ffdhe6144", 00:18:01.224 "ffdhe8192" 00:18:01.224 ] 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "bdev_nvme_set_hotplug", 00:18:01.224 "params": { 00:18:01.224 "period_us": 100000, 00:18:01.224 "enable": false 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "bdev_malloc_create", 00:18:01.224 "params": { 00:18:01.224 "name": "malloc0", 00:18:01.224 "num_blocks": 8192, 00:18:01.224 "block_size": 4096, 00:18:01.224 "physical_block_size": 4096, 00:18:01.224 "uuid": "5c156dea-2e98-46a7-bacd-063f0bf5086b", 00:18:01.224 "optimal_io_boundary": 0, 00:18:01.224 "md_size": 0, 00:18:01.224 "dif_type": 0, 00:18:01.224 "dif_is_head_of_md": false, 00:18:01.224 "dif_pi_format": 0 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "bdev_wait_for_examine" 00:18:01.224 } 00:18:01.224 ] 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "subsystem": "nbd", 00:18:01.224 "config": [] 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "subsystem": "scheduler", 00:18:01.224 "config": [ 00:18:01.224 { 00:18:01.224 "method": "framework_set_scheduler", 00:18:01.224 "params": { 00:18:01.224 "name": "static" 00:18:01.224 } 00:18:01.224 } 00:18:01.224 ] 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "subsystem": "nvmf", 00:18:01.224 "config": [ 00:18:01.224 { 00:18:01.224 "method": "nvmf_set_config", 00:18:01.224 "params": { 00:18:01.224 "discovery_filter": "match_any", 00:18:01.224 "admin_cmd_passthru": { 00:18:01.224 "identify_ctrlr": false 00:18:01.224 } 
00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_set_max_subsystems", 00:18:01.224 "params": { 00:18:01.224 "max_subsystems": 1024 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_set_crdt", 00:18:01.224 "params": { 00:18:01.224 "crdt1": 0, 00:18:01.224 "crdt2": 0, 00:18:01.224 "crdt3": 0 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_create_transport", 00:18:01.224 "params": { 00:18:01.224 "trtype": "TCP", 00:18:01.224 "max_queue_depth": 128, 00:18:01.224 "max_io_qpairs_per_ctrlr": 127, 00:18:01.224 "in_capsule_data_size": 4096, 00:18:01.224 "max_io_size": 131072, 00:18:01.224 "io_unit_size": 131072, 00:18:01.224 "max_aq_depth": 128, 00:18:01.224 "num_shared_buffers": 511, 00:18:01.224 "buf_cache_size": 4294967295, 00:18:01.224 "dif_insert_or_strip": false, 00:18:01.224 "zcopy": false, 00:18:01.224 "c2h_success": false, 00:18:01.224 "sock_priority": 0, 00:18:01.224 "abort_timeout_sec": 1, 00:18:01.224 "ack_timeout": 0, 00:18:01.224 "data_wr_pool_size": 0 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_create_subsystem", 00:18:01.224 "params": { 00:18:01.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.224 "allow_any_host": false, 00:18:01.224 "serial_number": "00000000000000000000", 00:18:01.224 "model_number": "SPDK bdev Controller", 00:18:01.224 "max_namespaces": 32, 00:18:01.224 "min_cntlid": 1, 00:18:01.224 "max_cntlid": 65519, 00:18:01.224 "ana_reporting": false 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_subsystem_add_host", 00:18:01.224 "params": { 00:18:01.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.224 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.224 "psk": "key0" 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_subsystem_add_ns", 00:18:01.224 "params": { 00:18:01.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.224 "namespace": { 00:18:01.224 "nsid": 1, 00:18:01.224 "bdev_name": 
"malloc0", 00:18:01.224 "nguid": "5C156DEA2E9846A7BACD063F0BF5086B", 00:18:01.224 "uuid": "5c156dea-2e98-46a7-bacd-063f0bf5086b", 00:18:01.224 "no_auto_visible": false 00:18:01.224 } 00:18:01.224 } 00:18:01.224 }, 00:18:01.224 { 00:18:01.224 "method": "nvmf_subsystem_add_listener", 00:18:01.224 "params": { 00:18:01.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.224 "listen_address": { 00:18:01.224 "trtype": "TCP", 00:18:01.224 "adrfam": "IPv4", 00:18:01.224 "traddr": "10.0.0.2", 00:18:01.224 "trsvcid": "4420" 00:18:01.224 }, 00:18:01.224 "secure_channel": false, 00:18:01.224 "sock_impl": "ssl" 00:18:01.224 } 00:18:01.224 } 00:18:01.224 ] 00:18:01.224 } 00:18:01.224 ] 00:18:01.224 }' 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1616299 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1616299 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1616299 ']' 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.224 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.483 [2024-07-24 20:45:56.825657] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:18:01.483 [2024-07-24 20:45:56.825736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.483 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.483 [2024-07-24 20:45:56.892848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.483 [2024-07-24 20:45:57.012263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.483 [2024-07-24 20:45:57.012325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.483 [2024-07-24 20:45:57.012354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.483 [2024-07-24 20:45:57.012365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.483 [2024-07-24 20:45:57.012375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.483 [2024-07-24 20:45:57.012454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.741 [2024-07-24 20:45:57.260531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.741 [2024-07-24 20:45:57.306040] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.741 [2024-07-24 20:45:57.306317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.307 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.307 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:02.307 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.307 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.307 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1616448 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1616448 /var/tmp/bdevperf.sock 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1616448 ']' 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.566 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:02.566 "subsystems": [ 00:18:02.566 { 00:18:02.566 "subsystem": "keyring", 00:18:02.566 "config": [ 00:18:02.566 { 00:18:02.566 "method": "keyring_file_add_key", 00:18:02.566 "params": { 00:18:02.566 "name": "key0", 00:18:02.566 "path": "/tmp/tmp.dZ16jTHSF8" 00:18:02.566 } 00:18:02.566 } 00:18:02.566 ] 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "subsystem": "iobuf", 00:18:02.566 "config": [ 00:18:02.566 { 00:18:02.566 "method": "iobuf_set_options", 00:18:02.566 "params": { 00:18:02.566 "small_pool_count": 8192, 00:18:02.566 "large_pool_count": 1024, 00:18:02.566 "small_bufsize": 8192, 00:18:02.566 "large_bufsize": 135168 00:18:02.566 } 00:18:02.566 } 00:18:02.566 ] 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "subsystem": "sock", 00:18:02.566 "config": [ 00:18:02.566 { 00:18:02.566 "method": "sock_set_default_impl", 00:18:02.566 "params": { 00:18:02.566 "impl_name": "posix" 00:18:02.566 } 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "method": "sock_impl_set_options", 00:18:02.566 "params": { 00:18:02.566 "impl_name": "ssl", 00:18:02.566 "recv_buf_size": 4096, 00:18:02.566 "send_buf_size": 4096, 00:18:02.566 "enable_recv_pipe": true, 00:18:02.566 "enable_quickack": false, 00:18:02.566 "enable_placement_id": 0, 00:18:02.566 "enable_zerocopy_send_server": true, 00:18:02.566 "enable_zerocopy_send_client": false, 00:18:02.566 "zerocopy_threshold": 0, 00:18:02.566 "tls_version": 0, 00:18:02.566 "enable_ktls": false 00:18:02.566 } 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "method": "sock_impl_set_options", 00:18:02.566 "params": { 00:18:02.566 "impl_name": "posix", 
00:18:02.566 "recv_buf_size": 2097152, 00:18:02.566 "send_buf_size": 2097152, 00:18:02.566 "enable_recv_pipe": true, 00:18:02.566 "enable_quickack": false, 00:18:02.566 "enable_placement_id": 0, 00:18:02.566 "enable_zerocopy_send_server": true, 00:18:02.566 "enable_zerocopy_send_client": false, 00:18:02.566 "zerocopy_threshold": 0, 00:18:02.566 "tls_version": 0, 00:18:02.566 "enable_ktls": false 00:18:02.566 } 00:18:02.566 } 00:18:02.566 ] 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "subsystem": "vmd", 00:18:02.566 "config": [] 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "subsystem": "accel", 00:18:02.566 "config": [ 00:18:02.566 { 00:18:02.566 "method": "accel_set_options", 00:18:02.566 "params": { 00:18:02.566 "small_cache_size": 128, 00:18:02.566 "large_cache_size": 16, 00:18:02.566 "task_count": 2048, 00:18:02.566 "sequence_count": 2048, 00:18:02.566 "buf_count": 2048 00:18:02.566 } 00:18:02.566 } 00:18:02.566 ] 00:18:02.566 }, 00:18:02.566 { 00:18:02.566 "subsystem": "bdev", 00:18:02.566 "config": [ 00:18:02.566 { 00:18:02.566 "method": "bdev_set_options", 00:18:02.566 "params": { 00:18:02.566 "bdev_io_pool_size": 65535, 00:18:02.567 "bdev_io_cache_size": 256, 00:18:02.567 "bdev_auto_examine": true, 00:18:02.567 "iobuf_small_cache_size": 128, 00:18:02.567 "iobuf_large_cache_size": 16 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_raid_set_options", 00:18:02.567 "params": { 00:18:02.567 "process_window_size_kb": 1024, 00:18:02.567 "process_max_bandwidth_mb_sec": 0 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_iscsi_set_options", 00:18:02.567 "params": { 00:18:02.567 "timeout_sec": 30 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_nvme_set_options", 00:18:02.567 "params": { 00:18:02.567 "action_on_timeout": "none", 00:18:02.567 "timeout_us": 0, 00:18:02.567 "timeout_admin_us": 0, 00:18:02.567 "keep_alive_timeout_ms": 10000, 00:18:02.567 "arbitration_burst": 0, 00:18:02.567 
"low_priority_weight": 0, 00:18:02.567 "medium_priority_weight": 0, 00:18:02.567 "high_priority_weight": 0, 00:18:02.567 "nvme_adminq_poll_period_us": 10000, 00:18:02.567 "nvme_ioq_poll_period_us": 0, 00:18:02.567 "io_queue_requests": 512, 00:18:02.567 "delay_cmd_submit": true, 00:18:02.567 "transport_retry_count": 4, 00:18:02.567 "bdev_retry_count": 3, 00:18:02.567 "transport_ack_timeout": 0, 00:18:02.567 "ctrlr_loss_timeout_sec": 0, 00:18:02.567 "reconnect_delay_sec": 0, 00:18:02.567 "fast_io_fail_timeout_sec": 0, 00:18:02.567 "disable_auto_failback": false, 00:18:02.567 "generate_uuids": false, 00:18:02.567 "transport_tos": 0, 00:18:02.567 "nvme_error_stat": false, 00:18:02.567 "rdma_srq_size": 0, 00:18:02.567 "io_path_stat": false, 00:18:02.567 "allow_accel_sequence": false, 00:18:02.567 "rdma_max_cq_size": 0, 00:18:02.567 "rdma_cm_event_timeout_ms": 0, 00:18:02.567 "dhchap_digests": [ 00:18:02.567 "sha256", 00:18:02.567 "sha384", 00:18:02.567 "sha512" 00:18:02.567 ], 00:18:02.567 "dhchap_dhgroups": [ 00:18:02.567 "null", 00:18:02.567 "ffdhe2048", 00:18:02.567 "ffdhe3072", 00:18:02.567 "ffdhe4096", 00:18:02.567 "ffdhe6144", 00:18:02.567 "ffdhe8192" 00:18:02.567 ] 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_nvme_attach_controller", 00:18:02.567 "params": { 00:18:02.567 "name": "nvme0", 00:18:02.567 "trtype": "TCP", 00:18:02.567 "adrfam": "IPv4", 00:18:02.567 "traddr": "10.0.0.2", 00:18:02.567 "trsvcid": "4420", 00:18:02.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.567 "prchk_reftag": false, 00:18:02.567 "prchk_guard": false, 00:18:02.567 "ctrlr_loss_timeout_sec": 0, 00:18:02.567 "reconnect_delay_sec": 0, 00:18:02.567 "fast_io_fail_timeout_sec": 0, 00:18:02.567 "psk": "key0", 00:18:02.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.567 "hdgst": false, 00:18:02.567 "ddgst": false 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_nvme_set_hotplug", 00:18:02.567 "params": { 00:18:02.567 
"period_us": 100000, 00:18:02.567 "enable": false 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_enable_histogram", 00:18:02.567 "params": { 00:18:02.567 "name": "nvme0n1", 00:18:02.567 "enable": true 00:18:02.567 } 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "method": "bdev_wait_for_examine" 00:18:02.567 } 00:18:02.567 ] 00:18:02.567 }, 00:18:02.567 { 00:18:02.567 "subsystem": "nbd", 00:18:02.567 "config": [] 00:18:02.567 } 00:18:02.567 ] 00:18:02.567 }' 00:18:02.567 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.567 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.567 [2024-07-24 20:45:57.928638] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:18:02.567 [2024-07-24 20:45:57.928720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616448 ] 00:18:02.567 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.567 [2024-07-24 20:45:57.992139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.567 [2024-07-24 20:45:58.107821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.825 [2024-07-24 20:45:58.293463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.391 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.391 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:03.391 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:03.391 20:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:03.648 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.648 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:03.648 Running I/O for 1 seconds... 00:18:05.022 00:18:05.022 Latency(us) 00:18:05.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.022 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:05.022 Verification LBA range: start 0x0 length 0x2000 00:18:05.022 nvme0n1 : 1.02 3365.12 13.15 0.00 0.00 37598.75 8009.96 35146.71 00:18:05.022 =================================================================================================================== 00:18:05.022 Total : 3365.12 13.15 0.00 0.00 37598.75 8009.96 35146.71 00:18:05.022 0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:05.022 nvmf_trace.0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1616448 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1616448 ']' 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1616448 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616448 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616448' 00:18:05.022 killing process with pid 1616448 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1616448 00:18:05.022 Received shutdown signal, test time was about 1.000000 seconds 00:18:05.022 00:18:05.022 Latency(us) 00:18:05.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.022 
=================================================================================================================== 00:18:05.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.022 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1616448 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.280 rmmod nvme_tcp 00:18:05.280 rmmod nvme_fabrics 00:18:05.280 rmmod nvme_keyring 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1616299 ']' 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1616299 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1616299 ']' 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1616299 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616299 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616299' 00:18:05.280 killing process with pid 1616299 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1616299 00:18:05.280 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1616299 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.537 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IBDIr3LO2w /tmp/tmp.NwAMjKaUKt /tmp/tmp.dZ16jTHSF8 00:18:08.067 00:18:08.067 real 1m22.494s 
00:18:08.067 user 2m13.933s 00:18:08.067 sys 0m25.372s 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.067 ************************************ 00:18:08.067 END TEST nvmf_tls 00:18:08.067 ************************************ 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.067 ************************************ 00:18:08.067 START TEST nvmf_fips 00:18:08.067 ************************************ 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.067 * Looking for test storage... 
00:18:08.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.067 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:08.068 Error setting digest 00:18:08.068 000226E18E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:08.068 000226E18E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.068 20:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.068 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:10.014 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.015 20:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:10.015 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:10.015 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:10.015 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:10.015 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:18:10.015 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:10.016 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:10.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:10.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms
00:18:10.274
00:18:10.274 --- 10.0.0.2 ping statistics ---
00:18:10.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:10.274 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:18:10.274 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:10.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:10.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:18:10.274
00:18:10.274 --- 10.0.0.1 ping statistics ---
00:18:10.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:10.274 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1618803
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1618803
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1618803 ']'
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:10.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:10.275 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:18:10.275 [2024-07-24 20:46:05.668638] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:18:10.275 [2024-07-24 20:46:05.668727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:10.275 EAL: No free 2048 kB hugepages reported on node 1
00:18:10.275 [2024-07-24 20:46:05.732398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.533 [2024-07-24 20:46:05.843047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:10.533 [2024-07-24 20:46:05.843095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:10.533 [2024-07-24 20:46:05.843110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:10.533 [2024-07-24 20:46:05.843122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:10.533 [2024-07-24 20:46:05.843132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:10.533 [2024-07-24 20:46:05.843159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:11.099 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:11.358 [2024-07-24 20:46:06.852216] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:11.358 [2024-07-24 20:46:06.868233] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:11.358 [2024-07-24 20:46:06.868536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:11.358 [2024-07-24 20:46:06.900875] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:18:11.358 malloc0
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1618964
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1618964 /var/tmp/bdevperf.sock
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1618964 ']'
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:11.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
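[Editor's aside: the `key=NVMeTLSkey-1:01:...` value traced above is an NVMe/TCP TLS PSK in the interchange format `NVMeTLSkey-1:<hh>:<base64>:`. The sketch below only checks the structural shape of such a string (prefix, hash identifier field, base64 payload, trailing colon) and its decoded length; the interpretation of the payload as a 32-byte PSK plus a 4-byte CRC for the `01` variant is an assumption of this sketch, and the CRC itself is deliberately not verified.]

```python
import base64

def check_psk_interchange(key: str) -> int:
    """Structurally validate an NVMe TLS PSK interchange string and
    return the decoded payload length in bytes."""
    parts = key.split(":")
    # Expected shape: NVMeTLSkey-1:<hh>:<base64>:  (trailing ':' -> empty last field)
    assert parts[0] == "NVMeTLSkey-1", "bad prefix"
    assert parts[1] in ("00", "01", "02"), "unexpected hash identifier field"
    assert parts[3] == "", "key must end with ':'"
    payload = base64.b64decode(parts[2])
    return len(payload)

key = "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:"
print(check_psk_interchange(key))  # 36 bytes (assumed: 32-byte PSK + 4-byte CRC)
```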
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:11.358 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:18:11.616 [2024-07-24 20:46:06.990132] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:18:11.616 [2024-07-24 20:46:06.990221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618964 ]
00:18:11.616 EAL: No free 2048 kB hugepages reported on node 1
00:18:11.616 [2024-07-24 20:46:07.045935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:11.616 [2024-07-24 20:46:07.150869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:11.875 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:11.875 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0
00:18:11.875 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:12.133 [2024-07-24 20:46:07.537139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:12.133 [2024-07-24 20:46:07.537261] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:12.133 TLSTESTn1
00:18:12.133 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:12.391 Running I/O for 10 seconds...
00:18:22.354
00:18:22.354 Latency(us)
00:18:22.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:22.354 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:22.354 Verification LBA range: start 0x0 length 0x2000
00:18:22.354 TLSTESTn1 : 10.02 3566.80 13.93 0.00 0.00 35821.34 6092.42 40001.23
00:18:22.354 ===================================================================================================================
00:18:22.354 Total : 3566.80 13.93 0.00 0.00 35821.34 6092.42 40001.23
00:18:22.354 0
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:22.354 nvmf_trace.0
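[Editor's aside: the MiB/s column in the bdevperf summary above follows arithmetically from the IOPS column and the 4096-byte IO size set by the `-o 4096` flag; a quick cross-check of the reported TLSTESTn1 row:]

```python
# Cross-check bdevperf's reported throughput: MiB/s = IOPS * io_size / 2^20.
iops = 3566.80        # IOPS column of the TLSTESTn1 row above
io_size = 4096        # bytes per IO, from the bdevperf '-o 4096' flag
mib_per_s = iops * io_size / (1 << 20)
print(round(mib_per_s, 2))  # 13.93, matching the MiB/s column
```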
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1618964
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1618964 ']'
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1618964
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1618964
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1618964'
00:18:22.354 killing process with pid 1618964
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1618964
00:18:22.354 Received shutdown signal, test time was about 10.000000 seconds
00:18:22.354
00:18:22.354 Latency(us)
00:18:22.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:22.354 ===================================================================================================================
00:18:22.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:22.354 [2024-07-24 20:46:17.913864] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:22.354 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1618964
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:22.612 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:22.612 rmmod nvme_tcp
00:18:22.870 rmmod nvme_fabrics
00:18:22.870 rmmod nvme_keyring
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1618803 ']'
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1618803
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1618803 ']'
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1618803
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1618803
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1618803'
00:18:22.870 killing process with pid 1618803
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1618803
00:18:22.870 [2024-07-24 20:46:18.235078] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:18:22.870 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1618803
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:23.129 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:25.030
00:18:25.030 real 0m17.439s
00:18:25.030 user 0m22.750s
00:18:25.030 sys 0m5.370s
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:18:25.030 ************************************
00:18:25.030 END TEST nvmf_fips
00:18:25.030 ************************************
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']'
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]]
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']'
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable
00:18:25.030 20:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=()
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:26.927 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:26.928 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:26.928 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:26.928 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:26.928 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 ))
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:26.928 ************************************
00:18:26.928 START TEST nvmf_perf_adq
00:18:26.928 ************************************
00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:18:26.928 * Looking for test storage...
00:18:26.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.928 20:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.928 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.929 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:29.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:29.457 20:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:29.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:29.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:29.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.457 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:29.458 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:29.458 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:29.458 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:29.715 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:31.613 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:36.910 
20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:36.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.910 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:36.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.911 20:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:36.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.911 20:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:36.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.911 
20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:36.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:18:36.911 00:18:36.911 --- 10.0.0.2 ping statistics --- 00:18:36.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.911 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:36.911 00:18:36.911 --- 10.0.0.1 ping statistics --- 00:18:36.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.911 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1624713 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1624713 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1624713 ']' 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.911 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:36.911 [2024-07-24 20:46:32.387998] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:18:36.911 [2024-07-24 20:46:32.388088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.911 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.911 [2024-07-24 20:46:32.460842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.170 [2024-07-24 20:46:32.582362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.170 [2024-07-24 20:46:32.582427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.170 [2024-07-24 20:46:32.582444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.170 [2024-07-24 20:46:32.582457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.170 [2024-07-24 20:46:32.582469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.170 [2024-07-24 20:46:32.582558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.170 [2024-07-24 20:46:32.582613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.170 [2024-07-24 20:46:32.582654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.170 [2024-07-24 20:46:32.582658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:37.170 20:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.170 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 [2024-07-24 20:46:32.820644] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 Malloc1 00:18:37.428 20:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.428 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.429 [2024-07-24 20:46:32.873783] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.429 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.429 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1624859 00:18:37.429 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:37.429 20:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:37.429 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:39.333 "tick_rate": 2700000000, 00:18:39.333 "poll_groups": [ 00:18:39.333 { 00:18:39.333 "name": "nvmf_tgt_poll_group_000", 00:18:39.333 "admin_qpairs": 1, 00:18:39.333 "io_qpairs": 1, 00:18:39.333 "current_admin_qpairs": 1, 00:18:39.333 "current_io_qpairs": 1, 00:18:39.333 "pending_bdev_io": 0, 00:18:39.333 "completed_nvme_io": 20882, 00:18:39.333 "transports": [ 00:18:39.333 { 00:18:39.333 "trtype": "TCP" 00:18:39.333 } 00:18:39.333 ] 00:18:39.333 }, 00:18:39.333 { 00:18:39.333 "name": "nvmf_tgt_poll_group_001", 00:18:39.333 "admin_qpairs": 0, 00:18:39.333 "io_qpairs": 1, 00:18:39.333 "current_admin_qpairs": 0, 00:18:39.333 "current_io_qpairs": 1, 00:18:39.333 "pending_bdev_io": 0, 00:18:39.333 "completed_nvme_io": 19177, 00:18:39.333 "transports": [ 00:18:39.333 { 00:18:39.333 "trtype": "TCP" 00:18:39.333 } 00:18:39.333 ] 00:18:39.333 }, 00:18:39.333 { 00:18:39.333 "name": "nvmf_tgt_poll_group_002", 00:18:39.333 "admin_qpairs": 0, 00:18:39.333 "io_qpairs": 1, 00:18:39.333 "current_admin_qpairs": 0, 00:18:39.333 "current_io_qpairs": 1, 00:18:39.333 "pending_bdev_io": 0, 
00:18:39.333 "completed_nvme_io": 21084, 00:18:39.333 "transports": [ 00:18:39.333 { 00:18:39.333 "trtype": "TCP" 00:18:39.333 } 00:18:39.333 ] 00:18:39.333 }, 00:18:39.333 { 00:18:39.333 "name": "nvmf_tgt_poll_group_003", 00:18:39.333 "admin_qpairs": 0, 00:18:39.333 "io_qpairs": 1, 00:18:39.333 "current_admin_qpairs": 0, 00:18:39.333 "current_io_qpairs": 1, 00:18:39.333 "pending_bdev_io": 0, 00:18:39.333 "completed_nvme_io": 20657, 00:18:39.333 "transports": [ 00:18:39.333 { 00:18:39.333 "trtype": "TCP" 00:18:39.333 } 00:18:39.333 ] 00:18:39.333 } 00:18:39.333 ] 00:18:39.333 }' 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:39.333 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:39.591 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:39.591 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:39.591 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1624859 00:18:47.695 Initializing NVMe Controllers 00:18:47.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:47.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:47.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:47.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:47.695 Initialization complete. Launching workers. 
00:18:47.695 ======================================================== 00:18:47.695 Latency(us) 00:18:47.695 Device Information : IOPS MiB/s Average min max 00:18:47.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11164.10 43.61 5734.25 1913.53 8429.50 00:18:47.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10072.01 39.34 6354.79 1634.93 10985.44 00:18:47.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10889.20 42.54 5879.47 2896.98 9008.84 00:18:47.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11021.10 43.05 5809.13 2885.01 7862.36 00:18:47.695 ======================================================== 00:18:47.695 Total : 43146.42 168.54 5934.88 1634.93 10985.44 00:18:47.695 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.695 rmmod nvme_tcp 00:18:47.695 rmmod nvme_fabrics 00:18:47.695 rmmod nvme_keyring 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:47.695 20:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1624713 ']' 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1624713 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1624713 ']' 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1624713 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1624713 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1624713' 00:18:47.695 killing process with pid 1624713 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1624713 00:18:47.695 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1624713 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.952 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.481 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.481 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:50.481 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:50.738 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:52.636 20:46:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.905 20:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.905 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:57.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:18:57.906 00:18:57.906 --- 10.0.0.2 ping statistics --- 00:18:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.906 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:18:57.906 00:18:57.906 --- 10.0.0.1 ping statistics --- 00:18:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.906 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:57.906 net.core.busy_poll = 1 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:57.906 net.core.busy_read = 1 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1627479 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1627479 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1627479 ']' 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.906 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:57.906 [2024-07-24 20:46:53.457187] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:18:57.906 [2024-07-24 20:46:53.457300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.164 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.164 [2024-07-24 20:46:53.525853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.164 [2024-07-24 20:46:53.642478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.164 [2024-07-24 20:46:53.642539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.164 [2024-07-24 20:46:53.642566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.164 [2024-07-24 20:46:53.642579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.164 [2024-07-24 20:46:53.642591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.164 [2024-07-24 20:46:53.642692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.164 [2024-07-24 20:46:53.642763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.164 [2024-07-24 20:46:53.642857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.164 [2024-07-24 20:46:53.642860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:59.094 20:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 [2024-07-24 20:46:54.592704] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 Malloc1 00:18:59.094 20:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.094 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.095 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.095 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.095 [2024-07-24 20:46:54.646006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.095 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.095 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1627637 00:18:59.095 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:59.095 20:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:59.352 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:01.248 "tick_rate": 2700000000, 00:19:01.248 "poll_groups": [ 00:19:01.248 { 00:19:01.248 "name": "nvmf_tgt_poll_group_000", 00:19:01.248 "admin_qpairs": 1, 00:19:01.248 "io_qpairs": 3, 00:19:01.248 "current_admin_qpairs": 1, 00:19:01.248 "current_io_qpairs": 3, 00:19:01.248 "pending_bdev_io": 0, 00:19:01.248 "completed_nvme_io": 27788, 00:19:01.248 "transports": [ 00:19:01.248 { 00:19:01.248 "trtype": "TCP" 00:19:01.248 } 00:19:01.248 ] 00:19:01.248 }, 00:19:01.248 { 00:19:01.248 "name": "nvmf_tgt_poll_group_001", 00:19:01.248 "admin_qpairs": 0, 00:19:01.248 "io_qpairs": 1, 00:19:01.248 "current_admin_qpairs": 0, 00:19:01.248 "current_io_qpairs": 1, 00:19:01.248 "pending_bdev_io": 0, 00:19:01.248 "completed_nvme_io": 26719, 00:19:01.248 "transports": [ 00:19:01.248 { 00:19:01.248 "trtype": "TCP" 00:19:01.248 } 00:19:01.248 ] 00:19:01.248 }, 00:19:01.248 { 00:19:01.248 "name": "nvmf_tgt_poll_group_002", 00:19:01.248 "admin_qpairs": 0, 00:19:01.248 "io_qpairs": 0, 00:19:01.248 "current_admin_qpairs": 0, 00:19:01.248 "current_io_qpairs": 0, 00:19:01.248 "pending_bdev_io": 0, 
00:19:01.248 "completed_nvme_io": 0, 00:19:01.248 "transports": [ 00:19:01.248 { 00:19:01.248 "trtype": "TCP" 00:19:01.248 } 00:19:01.248 ] 00:19:01.248 }, 00:19:01.248 { 00:19:01.248 "name": "nvmf_tgt_poll_group_003", 00:19:01.248 "admin_qpairs": 0, 00:19:01.248 "io_qpairs": 0, 00:19:01.248 "current_admin_qpairs": 0, 00:19:01.248 "current_io_qpairs": 0, 00:19:01.248 "pending_bdev_io": 0, 00:19:01.248 "completed_nvme_io": 0, 00:19:01.248 "transports": [ 00:19:01.248 { 00:19:01.248 "trtype": "TCP" 00:19:01.248 } 00:19:01.248 ] 00:19:01.248 } 00:19:01.248 ] 00:19:01.248 }' 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:01.248 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1627637 00:19:09.401 Initializing NVMe Controllers 00:19:09.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:09.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:09.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:09.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:09.401 Initialization complete. Launching workers. 
00:19:09.401 ======================================================== 00:19:09.401 Latency(us) 00:19:09.401 Device Information : IOPS MiB/s Average min max 00:19:09.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5098.30 19.92 12558.63 1794.96 58611.31 00:19:09.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4259.20 16.64 15060.20 2199.49 60204.90 00:19:09.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4975.70 19.44 12865.45 2000.87 61080.82 00:19:09.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13665.20 53.38 4683.89 1407.35 7008.95 00:19:09.401 ======================================================== 00:19:09.401 Total : 27998.40 109.37 9150.27 1407.35 61080.82 00:19:09.401 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.401 rmmod nvme_tcp 00:19:09.401 rmmod nvme_fabrics 00:19:09.401 rmmod nvme_keyring 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:09.401 20:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1627479 ']' 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1627479 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1627479 ']' 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1627479 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1627479 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1627479' 00:19:09.401 killing process with pid 1627479 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1627479 00:19:09.401 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1627479 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.660 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:12.943 00:19:12.943 real 0m45.799s 00:19:12.943 user 2m39.177s 00:19:12.943 sys 0m11.017s 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.943 ************************************ 00:19:12.943 END TEST nvmf_perf_adq 00:19:12.943 ************************************ 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.943 ************************************ 00:19:12.943 START TEST nvmf_shutdown 00:19:12.943 ************************************ 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:12.943 * Looking for test storage... 
00:19:12.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.943 20:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:12.943 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:12.944 20:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:12.944 ************************************ 00:19:12.944 START TEST nvmf_shutdown_tc1 00:19:12.944 ************************************ 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.944 20:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.944 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.846 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.106 20:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:15.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.106 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:15.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:15.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:15.107 20:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:15.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:19:15.107 00:19:15.107 --- 10.0.0.2 ping statistics --- 00:19:15.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.107 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:15.107 00:19:15.107 --- 10.0.0.1 ping statistics --- 00:19:15.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.107 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:15.107 
20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1630933 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1630933 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1630933 ']' 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.107 20:47:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:15.107 [2024-07-24 20:47:10.634181] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:19:15.107 [2024-07-24 20:47:10.634272] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.107 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.366 [2024-07-24 20:47:10.703384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.366 [2024-07-24 20:47:10.824313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.366 [2024-07-24 20:47:10.824376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.366 [2024-07-24 20:47:10.824392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.366 [2024-07-24 20:47:10.824413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.366 [2024-07-24 20:47:10.824425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.366 [2024-07-24 20:47:10.824518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.366 [2024-07-24 20:47:10.824569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.366 [2024-07-24 20:47:10.824631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.366 [2024-07-24 20:47:10.824633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.314 [2024-07-24 20:47:11.604513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.314 20:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.314 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.315 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.315 Malloc1 00:19:16.315 [2024-07-24 20:47:11.693819] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.315 Malloc2 00:19:16.315 Malloc3 00:19:16.315 Malloc4 00:19:16.315 Malloc5 00:19:16.572 Malloc6 00:19:16.572 Malloc7 00:19:16.572 Malloc8 00:19:16.572 Malloc9 
00:19:16.573 Malloc10 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1631118 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1631118 /var/tmp/bdevperf.sock 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1631118 ']' 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.832 { 00:19:16.832 "params": { 00:19:16.832 "name": "Nvme$subsystem", 00:19:16.832 "trtype": "$TEST_TRANSPORT", 00:19:16.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.832 "adrfam": "ipv4", 00:19:16.832 "trsvcid": "$NVMF_PORT", 00:19:16.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.832 "hdgst": ${hdgst:-false}, 00:19:16.832 "ddgst": ${ddgst:-false} 00:19:16.832 }, 00:19:16.832 "method": "bdev_nvme_attach_controller" 00:19:16.832 } 00:19:16.832 EOF 00:19:16.832 )") 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.832 { 00:19:16.832 "params": { 00:19:16.832 "name": "Nvme$subsystem", 00:19:16.832 "trtype": "$TEST_TRANSPORT", 00:19:16.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.832 "adrfam": "ipv4", 00:19:16.832 "trsvcid": "$NVMF_PORT", 00:19:16.832 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.832 "hdgst": ${hdgst:-false}, 00:19:16.832 "ddgst": ${ddgst:-false} 00:19:16.832 }, 00:19:16.832 "method": "bdev_nvme_attach_controller" 00:19:16.832 } 00:19:16.832 EOF 00:19:16.832 )") 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.832 { 00:19:16.832 "params": { 00:19:16.832 "name": "Nvme$subsystem", 00:19:16.832 "trtype": "$TEST_TRANSPORT", 00:19:16.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.832 "adrfam": "ipv4", 00:19:16.832 "trsvcid": "$NVMF_PORT", 00:19:16.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.832 "hdgst": ${hdgst:-false}, 00:19:16.832 "ddgst": ${ddgst:-false} 00:19:16.832 }, 00:19:16.832 "method": "bdev_nvme_attach_controller" 00:19:16.832 } 00:19:16.832 EOF 00:19:16.832 )") 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.832 { 00:19:16.832 "params": { 00:19:16.832 "name": "Nvme$subsystem", 00:19:16.832 "trtype": "$TEST_TRANSPORT", 00:19:16.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.832 "adrfam": "ipv4", 00:19:16.832 "trsvcid": "$NVMF_PORT", 00:19:16.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.832 "hdgst": 
${hdgst:-false}, 00:19:16.832 "ddgst": ${ddgst:-false} 00:19:16.832 }, 00:19:16.832 "method": "bdev_nvme_attach_controller" 00:19:16.832 } 00:19:16.832 EOF 00:19:16.832 )") 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.832 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.832 { 00:19:16.832 "params": { 00:19:16.832 "name": "Nvme$subsystem", 00:19:16.832 "trtype": "$TEST_TRANSPORT", 00:19:16.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.832 "adrfam": "ipv4", 00:19:16.832 "trsvcid": "$NVMF_PORT", 00:19:16.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.832 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.833 { 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme$subsystem", 00:19:16.833 "trtype": "$TEST_TRANSPORT", 00:19:16.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "$NVMF_PORT", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.833 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 
00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.833 { 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme$subsystem", 00:19:16.833 "trtype": "$TEST_TRANSPORT", 00:19:16.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "$NVMF_PORT", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.833 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.833 { 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme$subsystem", 00:19:16.833 "trtype": "$TEST_TRANSPORT", 00:19:16.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "$NVMF_PORT", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.833 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.833 { 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme$subsystem", 00:19:16.833 "trtype": "$TEST_TRANSPORT", 00:19:16.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "$NVMF_PORT", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.833 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.833 { 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme$subsystem", 00:19:16.833 "trtype": "$TEST_TRANSPORT", 00:19:16.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "$NVMF_PORT", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.833 "hdgst": ${hdgst:-false}, 00:19:16.833 "ddgst": ${ddgst:-false} 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 } 00:19:16.833 EOF 00:19:16.833 )") 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:16.833 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme1", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme2", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme3", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme4", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 
00:19:16.833 "params": { 00:19:16.833 "name": "Nvme5", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme6", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.833 }, 00:19:16.833 "method": "bdev_nvme_attach_controller" 00:19:16.833 },{ 00:19:16.833 "params": { 00:19:16.833 "name": "Nvme7", 00:19:16.833 "trtype": "tcp", 00:19:16.833 "traddr": "10.0.0.2", 00:19:16.833 "adrfam": "ipv4", 00:19:16.833 "trsvcid": "4420", 00:19:16.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:16.833 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:16.833 "hdgst": false, 00:19:16.833 "ddgst": false 00:19:16.834 }, 00:19:16.834 "method": "bdev_nvme_attach_controller" 00:19:16.834 },{ 00:19:16.834 "params": { 00:19:16.834 "name": "Nvme8", 00:19:16.834 "trtype": "tcp", 00:19:16.834 "traddr": "10.0.0.2", 00:19:16.834 "adrfam": "ipv4", 00:19:16.834 "trsvcid": "4420", 00:19:16.834 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:16.834 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:16.834 "hdgst": false, 00:19:16.834 "ddgst": false 00:19:16.834 }, 00:19:16.834 "method": "bdev_nvme_attach_controller" 00:19:16.834 },{ 00:19:16.834 "params": { 00:19:16.834 "name": "Nvme9", 00:19:16.834 "trtype": "tcp", 00:19:16.834 "traddr": "10.0.0.2", 00:19:16.834 "adrfam": "ipv4", 00:19:16.834 "trsvcid": "4420", 00:19:16.834 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:16.834 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:16.834 "hdgst": false, 00:19:16.834 "ddgst": false 00:19:16.834 }, 00:19:16.834 "method": "bdev_nvme_attach_controller" 00:19:16.834 },{ 00:19:16.834 "params": { 00:19:16.834 "name": "Nvme10", 00:19:16.834 "trtype": "tcp", 00:19:16.834 "traddr": "10.0.0.2", 00:19:16.834 "adrfam": "ipv4", 00:19:16.834 "trsvcid": "4420", 00:19:16.834 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:16.834 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:16.834 "hdgst": false, 00:19:16.834 "ddgst": false 00:19:16.834 }, 00:19:16.834 "method": "bdev_nvme_attach_controller" 00:19:16.834 }' 00:19:16.834 [2024-07-24 20:47:12.215768] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:16.834 [2024-07-24 20:47:12.215841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:16.834 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.834 [2024-07-24 20:47:12.279646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.834 [2024-07-24 20:47:12.391800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:18.729 20:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1631118 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:18.729 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:19.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1631118 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1630933 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.661 { 00:19:19.661 "params": { 00:19:19.661 "name": "Nvme$subsystem", 00:19:19.661 "trtype": "$TEST_TRANSPORT", 00:19:19.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.661 "adrfam": "ipv4", 00:19:19.661 "trsvcid": 
"$NVMF_PORT", 00:19:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.661 "hdgst": ${hdgst:-false}, 00:19:19.661 "ddgst": ${ddgst:-false} 00:19:19.661 }, 00:19:19.661 "method": "bdev_nvme_attach_controller" 00:19:19.661 } 00:19:19.661 EOF 00:19:19.661 )") 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.661 { 00:19:19.661 "params": { 00:19:19.661 "name": "Nvme$subsystem", 00:19:19.661 "trtype": "$TEST_TRANSPORT", 00:19:19.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.661 "adrfam": "ipv4", 00:19:19.661 "trsvcid": "$NVMF_PORT", 00:19:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.661 "hdgst": ${hdgst:-false}, 00:19:19.661 "ddgst": ${ddgst:-false} 00:19:19.661 }, 00:19:19.661 "method": "bdev_nvme_attach_controller" 00:19:19.661 } 00:19:19.661 EOF 00:19:19.661 )") 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.661 { 00:19:19.661 "params": { 00:19:19.661 "name": "Nvme$subsystem", 00:19:19.661 "trtype": "$TEST_TRANSPORT", 00:19:19.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.661 "adrfam": "ipv4", 00:19:19.661 "trsvcid": "$NVMF_PORT", 00:19:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.661 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:19.661 "hdgst": ${hdgst:-false}, 00:19:19.661 "ddgst": ${ddgst:-false} 00:19:19.661 }, 00:19:19.661 "method": "bdev_nvme_attach_controller" 00:19:19.661 } 00:19:19.661 EOF 00:19:19.661 )") 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.661 { 00:19:19.661 "params": { 00:19:19.661 "name": "Nvme$subsystem", 00:19:19.661 "trtype": "$TEST_TRANSPORT", 00:19:19.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.661 "adrfam": "ipv4", 00:19:19.661 "trsvcid": "$NVMF_PORT", 00:19:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.661 "hdgst": ${hdgst:-false}, 00:19:19.661 "ddgst": ${ddgst:-false} 00:19:19.661 }, 00:19:19.661 "method": "bdev_nvme_attach_controller" 00:19:19.661 } 00:19:19.661 EOF 00:19:19.661 )") 00:19:19.661 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.919 { 00:19:19.919 "params": { 00:19:19.919 "name": "Nvme$subsystem", 00:19:19.919 "trtype": "$TEST_TRANSPORT", 00:19:19.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.919 "adrfam": "ipv4", 00:19:19.919 "trsvcid": "$NVMF_PORT", 00:19:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.919 "hdgst": ${hdgst:-false}, 00:19:19.919 "ddgst": ${ddgst:-false} 00:19:19.919 
}, 00:19:19.919 "method": "bdev_nvme_attach_controller" 00:19:19.919 } 00:19:19.919 EOF 00:19:19.919 )") 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.919 { 00:19:19.919 "params": { 00:19:19.919 "name": "Nvme$subsystem", 00:19:19.919 "trtype": "$TEST_TRANSPORT", 00:19:19.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.919 "adrfam": "ipv4", 00:19:19.919 "trsvcid": "$NVMF_PORT", 00:19:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.919 "hdgst": ${hdgst:-false}, 00:19:19.919 "ddgst": ${ddgst:-false} 00:19:19.919 }, 00:19:19.919 "method": "bdev_nvme_attach_controller" 00:19:19.919 } 00:19:19.919 EOF 00:19:19.919 )") 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.919 { 00:19:19.919 "params": { 00:19:19.919 "name": "Nvme$subsystem", 00:19:19.919 "trtype": "$TEST_TRANSPORT", 00:19:19.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.919 "adrfam": "ipv4", 00:19:19.919 "trsvcid": "$NVMF_PORT", 00:19:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.919 "hdgst": ${hdgst:-false}, 00:19:19.919 "ddgst": ${ddgst:-false} 00:19:19.919 }, 00:19:19.919 "method": "bdev_nvme_attach_controller" 00:19:19.919 } 00:19:19.919 EOF 00:19:19.919 )") 00:19:19.919 20:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.919 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.919 { 00:19:19.919 "params": { 00:19:19.919 "name": "Nvme$subsystem", 00:19:19.919 "trtype": "$TEST_TRANSPORT", 00:19:19.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "$NVMF_PORT", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.920 "hdgst": ${hdgst:-false}, 00:19:19.920 "ddgst": ${ddgst:-false} 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 } 00:19:19.920 EOF 00:19:19.920 )") 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.920 { 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme$subsystem", 00:19:19.920 "trtype": "$TEST_TRANSPORT", 00:19:19.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "$NVMF_PORT", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.920 "hdgst": ${hdgst:-false}, 00:19:19.920 "ddgst": ${ddgst:-false} 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 } 00:19:19.920 EOF 00:19:19.920 )") 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.920 20:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:19.920 { 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme$subsystem", 00:19:19.920 "trtype": "$TEST_TRANSPORT", 00:19:19.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "$NVMF_PORT", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.920 "hdgst": ${hdgst:-false}, 00:19:19.920 "ddgst": ${ddgst:-false} 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 } 00:19:19.920 EOF 00:19:19.920 )") 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:19.920 20:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme1", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme2", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme3", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme4", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 
00:19:19.920 "name": "Nvme5", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme6", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme7", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme8", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme9", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 },{ 00:19:19.920 "params": { 00:19:19.920 "name": "Nvme10", 00:19:19.920 "trtype": "tcp", 00:19:19.920 "traddr": "10.0.0.2", 00:19:19.920 "adrfam": "ipv4", 00:19:19.920 "trsvcid": "4420", 00:19:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:19.920 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:19.920 "hdgst": false, 00:19:19.920 "ddgst": false 00:19:19.920 }, 00:19:19.920 "method": "bdev_nvme_attach_controller" 00:19:19.920 }' 00:19:19.920 [2024-07-24 20:47:15.260129] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:19.920 [2024-07-24 20:47:15.260222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631540 ] 00:19:19.920 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.920 [2024-07-24 20:47:15.324935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.920 [2024-07-24 20:47:15.439374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.818 Running I/O for 1 seconds... 
00:19:22.751 00:19:22.751 Latency(us) 00:19:22.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.751 Verification LBA range: start 0x0 length 0x400 00:19:22.751 Nvme1n1 : 1.02 187.88 11.74 0.00 0.00 337123.81 24563.86 256318.58 00:19:22.751 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.751 Verification LBA range: start 0x0 length 0x400 00:19:22.751 Nvme2n1 : 1.10 233.53 14.60 0.00 0.00 266426.22 17282.09 259425.47 00:19:22.751 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.751 Verification LBA range: start 0x0 length 0x400 00:19:22.751 Nvme3n1 : 1.09 246.20 15.39 0.00 0.00 244228.30 11311.03 243891.01 00:19:22.751 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme4n1 : 1.18 271.75 16.98 0.00 0.00 220604.95 19806.44 243891.01 00:19:22.752 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme5n1 : 1.17 217.89 13.62 0.00 0.00 270562.61 13981.01 301368.51 00:19:22.752 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme6n1 : 1.14 225.06 14.07 0.00 0.00 258458.55 19320.98 254765.13 00:19:22.752 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme7n1 : 1.19 269.84 16.87 0.00 0.00 213022.49 14272.28 254765.13 00:19:22.752 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme8n1 : 1.14 228.93 14.31 0.00 0.00 244093.70 4320.52 233016.89 00:19:22.752 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme9n1 : 1.15 223.04 13.94 0.00 0.00 248025.32 20291.89 259425.47 00:19:22.752 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:22.752 Verification LBA range: start 0x0 length 0x400 00:19:22.752 Nvme10n1 : 1.19 268.00 16.75 0.00 0.00 203926.79 9320.68 264085.81 00:19:22.752 =================================================================================================================== 00:19:22.752 Total : 2372.12 148.26 0.00 0.00 245853.66 4320.52 301368.51 00:19:23.009 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:23.009 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:23.009 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.010 
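A quick sanity check on the bdevperf table above: the MiB/s column is just IOPS multiplied by the IO size (65536 B = 1/16 MiB). A minimal awk check against the Nvme1n1 row (values taken from the table; the formula is an inference, not something the log states):

```shell
# Sanity-check the bdevperf table: MiB/s = IOPS * IO size / 2^20.
# 187.88 IOPS is the Nvme1n1 row; 65536 B is the test's -o IO size.
mibs=$(awk 'BEGIN {
  iops = 187.88
  io_size = 65536            # bytes per IO, from "-o 65536"
  printf "%.2f", iops * io_size / 1048576
}')
printf '%s\n' "$mibs"        # should reproduce the reported 11.74 MiB/s
```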
20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.010 rmmod nvme_tcp 00:19:23.010 rmmod nvme_fabrics 00:19:23.010 rmmod nvme_keyring 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1630933 ']' 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1630933 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1630933 ']' 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1630933 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630933 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630933' 00:19:23.010 killing process 
with pid 1630933 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1630933 00:19:23.010 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1630933 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.575 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.576 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:26.122 00:19:26.122 real 0m12.751s 00:19:26.122 user 0m37.799s 00:19:26.122 sys 0m3.340s 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:26.122 ************************************ 00:19:26.122 END TEST nvmf_shutdown_tc1 00:19:26.122 ************************************ 
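The repeated heredoc/`cat` fragments traced above come from `gen_nvmf_target_json` in `nvmf/common.sh`: for each subsystem it appends one JSON fragment to a `config` array, then joins the fragments with commas before handing them to the target as `--json` input. A hedged sketch of that pattern (variable names taken from the trace; the real script also pipes the result through `jq`, which is omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem JSON generation seen in nvmf/common.sh:
# one heredoc fragment per subsystem, collected into an array and
# comma-joined. Values below are placeholders matching the traced run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join fragments with commas, as common.sh does via IFS=, before printf.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

This matches the `},{`-joined output that `printf '%s\n'` emits in the trace before it is fed to bdev_svc/bdevperf.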
00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:26.122 ************************************ 00:19:26.122 START TEST nvmf_shutdown_tc2 00:19:26.122 ************************************ 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.122 20:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:26.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.122 20:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:26.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:26.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.122 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:26.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.123 
20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:26.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:26.123 00:19:26.123 --- 10.0.0.2 ping statistics --- 00:19:26.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.123 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:19:26.123 00:19:26.123 --- 10.0.0.1 ping statistics --- 00:19:26.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.123 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.123 20:47:21 
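The `ip netns` / `iptables` / `ping` sequence above is the per-run network bring-up (`nvmf_tcp_init` in `nvmf/common.sh`): the target-side port is moved into a private namespace and both ends get addresses on 10.0.0.0/24, verified by the two pings. Below is a dry-run sketch of those steps, with interface names, addresses, and the 4420 port copied from the log; it only prints the commands, so it needs no root and no `cvl_0_*` hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace bring-up shown in the log (nvmf_tcp_init).
# Interface names, IPs, and port 4420 are taken verbatim from the log;
# commands are printed rather than executed so no privileges are needed.
print_tcp_init() {
    local tgt_if=$1 ini_if=$2 ns=$3
    echo "ip netns add $ns"                          # target-side namespace
    echo "ip link set $tgt_if netns $ns"             # move target port into it
    echo "ip addr add 10.0.0.1/24 dev $ini_if"       # initiator address
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
    # open the NVMe/TCP port on the initiator-facing interface
    echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"                        # initiator -> target
    echo "ip netns exec $ns ping -c 1 10.0.0.1"      # target -> initiator
}

cmds=$(print_tcp_init cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk)
printf '%s\n' "$cmds"
```

Piping the output through `sudo bash` on a machine that has those ports would apply the topology; the log's teardown counterpart is `remove_spdk_ns` plus the `ip -4 addr flush` calls seen at the end of tc1.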
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1632371 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1632371 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1632371 ']' 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.123 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.123 [2024-07-24 20:47:21.415821] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:26.123 [2024-07-24 20:47:21.415898] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.123 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.123 [2024-07-24 20:47:21.480566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.123 [2024-07-24 20:47:21.595646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.123 [2024-07-24 20:47:21.595702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.123 [2024-07-24 20:47:21.595730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.123 [2024-07-24 20:47:21.595742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.123 [2024-07-24 20:47:21.595752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.123 [2024-07-24 20:47:21.595881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.123 [2024-07-24 20:47:21.595947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.123 [2024-07-24 20:47:21.596013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.123 [2024-07-24 20:47:21.596015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:27.056 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.057 [2024-07-24 20:47:22.437998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.057 20:47:22 
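`waitforlisten`, invoked above with the new `nvmfpid`, blocks until the freshly launched app has created its RPC UNIX socket (`/var/tmp/spdk.sock`, per the "Waiting for process..." message) before any `rpc_cmd` runs. A hedged sketch of that polling loop follows; the socket path and `max_retries=100` appear in the log, while the `kill -0` liveness check and 0.1 s poll interval are assumptions about the real helper:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling loop implied by the log output.
# rpc_addr and max_retries=100 come from the log; the kill -0 liveness
# check and the 0.1s interval are assumptions, not the harness's code.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited early
        [ -S "$rpc_addr" ] && return 0           # RPC socket is listening
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```

Returning nonzero when the process dies first is what turns a crashed `nvmf_tgt` into an immediate test failure instead of a 10-second hang.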
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.057 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.057 Malloc1 00:19:27.057 [2024-07-24 20:47:22.522186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.057 Malloc2 00:19:27.057 Malloc3 00:19:27.314 Malloc4 00:19:27.314 Malloc5 00:19:27.314 Malloc6 00:19:27.314 Malloc7 00:19:27.314 Malloc8 00:19:27.573 Malloc9 
00:19:27.573 Malloc10 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1632619 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1632619 /var/tmp/bdevperf.sock 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1632619 ']' 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.573 { 00:19:27.573 "params": { 00:19:27.573 "name": "Nvme$subsystem", 00:19:27.573 "trtype": "$TEST_TRANSPORT", 00:19:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.573 "adrfam": "ipv4", 00:19:27.573 "trsvcid": "$NVMF_PORT", 00:19:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.573 "hdgst": ${hdgst:-false}, 00:19:27.573 "ddgst": ${ddgst:-false} 00:19:27.573 }, 00:19:27.573 "method": "bdev_nvme_attach_controller" 00:19:27.573 } 00:19:27.573 EOF 00:19:27.573 )") 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.573 { 00:19:27.573 "params": { 00:19:27.573 "name": "Nvme$subsystem", 00:19:27.573 "trtype": "$TEST_TRANSPORT", 00:19:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.573 "adrfam": "ipv4", 00:19:27.573 "trsvcid": "$NVMF_PORT", 00:19:27.573 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.573 "hdgst": ${hdgst:-false}, 00:19:27.573 "ddgst": ${ddgst:-false} 00:19:27.573 }, 00:19:27.573 "method": "bdev_nvme_attach_controller" 00:19:27.573 } 00:19:27.573 EOF 00:19:27.573 )") 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.573 { 00:19:27.573 "params": { 00:19:27.573 "name": "Nvme$subsystem", 00:19:27.573 "trtype": "$TEST_TRANSPORT", 00:19:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.573 "adrfam": "ipv4", 00:19:27.573 "trsvcid": "$NVMF_PORT", 00:19:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.573 "hdgst": ${hdgst:-false}, 00:19:27.573 "ddgst": ${ddgst:-false} 00:19:27.573 }, 00:19:27.573 "method": "bdev_nvme_attach_controller" 00:19:27.573 } 00:19:27.573 EOF 00:19:27.573 )") 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.573 { 00:19:27.573 "params": { 00:19:27.573 "name": "Nvme$subsystem", 00:19:27.573 "trtype": "$TEST_TRANSPORT", 00:19:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.573 "adrfam": "ipv4", 00:19:27.573 "trsvcid": "$NVMF_PORT", 00:19:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.573 "hdgst": 
${hdgst:-false}, 00:19:27.573 "ddgst": ${ddgst:-false} 00:19:27.573 }, 00:19:27.573 "method": "bdev_nvme_attach_controller" 00:19:27.573 } 00:19:27.573 EOF 00:19:27.573 )") 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.573 { 00:19:27.573 "params": { 00:19:27.573 "name": "Nvme$subsystem", 00:19:27.573 "trtype": "$TEST_TRANSPORT", 00:19:27.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.573 "adrfam": "ipv4", 00:19:27.573 "trsvcid": "$NVMF_PORT", 00:19:27.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.573 "hdgst": ${hdgst:-false}, 00:19:27.573 "ddgst": ${ddgst:-false} 00:19:27.573 }, 00:19:27.573 "method": "bdev_nvme_attach_controller" 00:19:27.573 } 00:19:27.573 EOF 00:19:27.573 )") 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.573 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.574 { 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme$subsystem", 00:19:27.574 "trtype": "$TEST_TRANSPORT", 00:19:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "$NVMF_PORT", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.574 "hdgst": ${hdgst:-false}, 00:19:27.574 "ddgst": ${ddgst:-false} 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 
00:19:27.574 } 00:19:27.574 EOF 00:19:27.574 )") 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.574 { 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme$subsystem", 00:19:27.574 "trtype": "$TEST_TRANSPORT", 00:19:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "$NVMF_PORT", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.574 "hdgst": ${hdgst:-false}, 00:19:27.574 "ddgst": ${ddgst:-false} 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 } 00:19:27.574 EOF 00:19:27.574 )") 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.574 { 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme$subsystem", 00:19:27.574 "trtype": "$TEST_TRANSPORT", 00:19:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "$NVMF_PORT", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.574 "hdgst": ${hdgst:-false}, 00:19:27.574 "ddgst": ${ddgst:-false} 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 } 00:19:27.574 EOF 00:19:27.574 )") 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.574 { 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme$subsystem", 00:19:27.574 "trtype": "$TEST_TRANSPORT", 00:19:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "$NVMF_PORT", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.574 "hdgst": ${hdgst:-false}, 00:19:27.574 "ddgst": ${ddgst:-false} 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 } 00:19:27.574 EOF 00:19:27.574 )") 00:19:27.574 20:47:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.574 { 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme$subsystem", 00:19:27.574 "trtype": "$TEST_TRANSPORT", 00:19:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "$NVMF_PORT", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.574 "hdgst": ${hdgst:-false}, 00:19:27.574 "ddgst": ${ddgst:-false} 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 } 00:19:27.574 EOF 00:19:27.574 )") 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:27.574 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme1", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme2", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme3", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme4", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 
00:19:27.574 "params": { 00:19:27.574 "name": "Nvme5", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme6", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme7", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme8", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:27.574 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:27.574 "hdgst": false, 00:19:27.574 "ddgst": false 00:19:27.574 }, 00:19:27.574 "method": "bdev_nvme_attach_controller" 00:19:27.574 },{ 00:19:27.574 "params": { 00:19:27.574 "name": "Nvme9", 00:19:27.574 "trtype": "tcp", 00:19:27.574 "traddr": "10.0.0.2", 00:19:27.574 "adrfam": "ipv4", 00:19:27.574 "trsvcid": "4420", 00:19:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:27.574 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:27.575 "hdgst": false, 00:19:27.575 "ddgst": false 00:19:27.575 }, 00:19:27.575 "method": "bdev_nvme_attach_controller" 00:19:27.575 },{ 00:19:27.575 "params": { 00:19:27.575 "name": "Nvme10", 00:19:27.575 "trtype": "tcp", 00:19:27.575 "traddr": "10.0.0.2", 00:19:27.575 "adrfam": "ipv4", 00:19:27.575 "trsvcid": "4420", 00:19:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:27.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:27.575 "hdgst": false, 00:19:27.575 "ddgst": false 00:19:27.575 }, 00:19:27.575 "method": "bdev_nvme_attach_controller" 00:19:27.575 }' 00:19:27.575 [2024-07-24 20:47:23.018101] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:27.575 [2024-07-24 20:47:23.018180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632619 ] 00:19:27.575 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.575 [2024-07-24 20:47:23.082902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.832 [2024-07-24 20:47:23.192993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.201 Running I/O for 10 seconds... 
00:19:29.459 20:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.459 20:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:29.459 20:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:29.459 20:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.459 20:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.459 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.716 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.716 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:29.716 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:29.716 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:29.973 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1632619 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1632619 
']' 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1632619 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1632619 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1632619' 00:19:30.231 killing process with pid 1632619 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1632619 00:19:30.231 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1632619 00:19:30.231 Received shutdown signal, test time was about 0.998117 seconds 00:19:30.231 00:19:30.231 Latency(us) 00:19:30.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme1n1 : 0.97 198.20 12.39 0.00 0.00 319285.48 22719.15 276513.37 00:19:30.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme2n1 : 0.99 259.35 16.21 0.00 0.00 239352.79 21456.97 251658.24 00:19:30.231 Job: Nvme3n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme3n1 : 1.00 257.28 16.08 0.00 0.00 234809.46 8252.68 257872.02 00:19:30.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme4n1 : 0.99 258.45 16.15 0.00 0.00 230777.55 22233.69 259425.47 00:19:30.231 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme5n1 : 0.95 214.00 13.37 0.00 0.00 269005.53 6213.78 259425.47 00:19:30.231 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme6n1 : 1.00 256.70 16.04 0.00 0.00 223217.97 33010.73 240784.12 00:19:30.231 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme7n1 : 0.97 197.55 12.35 0.00 0.00 283274.56 23981.32 256318.58 00:19:30.231 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme8n1 : 0.98 265.71 16.61 0.00 0.00 205416.50 1177.22 231463.44 00:19:30.231 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme9n1 : 0.98 195.94 12.25 0.00 0.00 273791.81 21748.24 296708.17 00:19:30.231 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:30.231 Verification LBA range: start 0x0 length 0x400 00:19:30.231 Nvme10n1 : 0.97 198.73 12.42 0.00 0.00 263489.61 19320.98 260978.92 00:19:30.231 =================================================================================================================== 00:19:30.231 Total : 2301.91 143.87 0.00 0.00 250309.61 1177.22 296708.17 00:19:30.488 
20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.861 rmmod nvme_tcp 00:19:31.861 rmmod nvme_fabrics 00:19:31.861 rmmod nvme_keyring 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.861 
20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1632371 ']' 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1632371 ']' 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1632371' 00:19:31.861 killing process with pid 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1632371 00:19:31.861 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1632371 00:19:32.426 20:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.426 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.370 00:19:34.370 real 0m8.565s 00:19:34.370 user 0m26.778s 00:19:34.370 sys 0m1.637s 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:34.370 ************************************ 00:19:34.370 END TEST nvmf_shutdown_tc2 00:19:34.370 ************************************ 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:34.370 20:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:34.370 ************************************ 00:19:34.370 START TEST nvmf_shutdown_tc3 00:19:34.370 ************************************ 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.370 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.371 20:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.371 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:34.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:34.371 20:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:34.371 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:34.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:19:34.372 00:19:34.372 --- 10.0.0.2 ping statistics --- 00:19:34.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.372 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:19:34.372 00:19:34.372 --- 10.0.0.1 ping statistics --- 00:19:34.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.372 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.372 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.629 
20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1633535 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1633535 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1633535 ']' 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.629 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.629 [2024-07-24 20:47:30.018419] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:19:34.629 [2024-07-24 20:47:30.018516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.629 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.629 [2024-07-24 20:47:30.086706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.886 [2024-07-24 20:47:30.203460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.886 [2024-07-24 20:47:30.203510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.886 [2024-07-24 20:47:30.203526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.886 [2024-07-24 20:47:30.203548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.886 [2024-07-24 20:47:30.203559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.886 [2024-07-24 20:47:30.203640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.886 [2024-07-24 20:47:30.203753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.886 [2024-07-24 20:47:30.203819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:34.886 [2024-07-24 20:47:30.203821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.450 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.450 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:35.450 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.450 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.450 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.708 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.708 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.708 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.708 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.708 [2024-07-24 20:47:31.023939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.708 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.709 20:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.709 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.709 Malloc1 00:19:35.709 [2024-07-24 20:47:31.103307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.709 Malloc2 00:19:35.709 Malloc3 00:19:35.709 Malloc4 00:19:35.709 Malloc5 00:19:35.967 Malloc6 00:19:35.967 Malloc7 00:19:35.967 Malloc8 00:19:35.967 Malloc9 
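The loop above (`for i in "${num_subsystems[@]}"` with `cat` into rpcs.txt) builds ten subsystems, each paired with a Malloc bdev and a matching subsystem/host NQN. A minimal sketch of that naming convention — not the SPDK script itself, just the pattern visible in this log:

```python
# Hedged sketch: reproduce the per-subsystem naming convention seen in the
# log -- ten subsystems, each with a Malloc bdev, a subsystem NQN
# (cnode$i), and a host NQN (host$i).
num_subsystems = range(1, 11)

def subsystem_names(i):
    """Return the (bdev, subsystem NQN, host NQN) triple for index i."""
    return (
        f"Malloc{i}",
        f"nqn.2016-06.io.spdk:cnode{i}",
        f"nqn.2016-06.io.spdk:host{i}",
    )

names = [subsystem_names(i) for i in num_subsystems]
```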
00:19:35.967 Malloc10 00:19:35.967 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.967 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:35.967 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.967 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1633725 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1633725 /var/tmp/bdevperf.sock 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1633725 ']' 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.228 20:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.228 { 00:19:36.228 "params": { 00:19:36.228 "name": "Nvme$subsystem", 00:19:36.228 "trtype": "$TEST_TRANSPORT", 00:19:36.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.228 "adrfam": "ipv4", 00:19:36.228 "trsvcid": "$NVMF_PORT", 00:19:36.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.228 "hdgst": ${hdgst:-false}, 00:19:36.228 "ddgst": ${ddgst:-false} 00:19:36.228 }, 00:19:36.228 "method": "bdev_nvme_attach_controller" 00:19:36.228 } 00:19:36.228 EOF 00:19:36.228 )") 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.228 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.228 { 00:19:36.228 "params": { 00:19:36.228 "name": "Nvme$subsystem", 00:19:36.228 "trtype": "$TEST_TRANSPORT", 00:19:36.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.228 "adrfam": "ipv4", 00:19:36.228 "trsvcid": "$NVMF_PORT", 00:19:36.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.228 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": 
${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 
)") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 
20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:36.229 { 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme$subsystem", 00:19:36.229 "trtype": "$TEST_TRANSPORT", 00:19:36.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "$NVMF_PORT", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.229 "hdgst": ${hdgst:-false}, 00:19:36.229 "ddgst": ${ddgst:-false} 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 } 00:19:36.229 EOF 00:19:36.229 )") 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:36.229 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme1", 00:19:36.229 "trtype": "tcp", 00:19:36.229 "traddr": "10.0.0.2", 00:19:36.229 "adrfam": "ipv4", 00:19:36.229 "trsvcid": "4420", 00:19:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.229 "hdgst": false, 00:19:36.229 "ddgst": false 00:19:36.229 }, 00:19:36.229 "method": "bdev_nvme_attach_controller" 00:19:36.229 },{ 00:19:36.229 "params": { 00:19:36.229 "name": "Nvme2", 00:19:36.229 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 
00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme3", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme4", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme5", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme6", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme7", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:36.230 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme8", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme9", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 },{ 00:19:36.230 "params": { 00:19:36.230 "name": "Nvme10", 00:19:36.230 "trtype": "tcp", 00:19:36.230 "traddr": "10.0.0.2", 00:19:36.230 "adrfam": "ipv4", 00:19:36.230 "trsvcid": "4420", 00:19:36.230 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:36.230 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:36.230 "hdgst": false, 00:19:36.230 "ddgst": false 00:19:36.230 }, 00:19:36.230 "method": "bdev_nvme_attach_controller" 00:19:36.230 }' 00:19:36.230 [2024-07-24 20:47:31.594131] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
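The config assembly traced above (nvmf/common.sh@554–558) builds one `bdev_nvme_attach_controller` fragment per subsystem from a heredoc template, then joins the fragments with `IFS=,`. A minimal standalone sketch of that loop follows — three subsystems instead of ten, and the surrounding generated-config wrapper simplified to a bare JSON array; this is an illustration, not the actual nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the log above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # ${hdgst:-false}/${ddgst:-false} default the digests off, as in the log.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# IFS=, joins the fragments with commas, as common.sh@557-558 does; the real
# script then pipes the full generated document through `jq .` to validate it.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '[%s]\n' "$joined"
```

The resulting comma-joined list is what `printf '%s\n'` emits in the log, one `{ "params": ..., "method": "bdev_nvme_attach_controller" }` object per target subsystem.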
00:19:36.230 [2024-07-24 20:47:31.594212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633725 ] 00:19:36.230 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.230 [2024-07-24 20:47:31.659500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.230 [2024-07-24 20:47:31.770631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.126 Running I/O for 10 seconds... 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:38.126 20:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:38.126 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.384 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:38.384 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:38.384 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:38.642 20:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:38.642 20:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.642 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:38.642 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:38.642 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1633535 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1633535 ']' 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1633535 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1633535 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1633535' 00:19:38.916 killing process with pid 1633535 00:19:38.916 20:47:34 
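The polling stepped through above (the `waitforio` helper, shutdown.sh@57–69) is a bounded retry: up to ten iterations, read `num_read_ops` for Nvme1n1 via `bdev_get_iostat`, succeed once at least 100 reads are observed (here 3, then 67, then 147), otherwise sleep 0.25 s and decrement. A minimal sketch, with the `rpc_cmd ... | jq` pipeline replaced by a hypothetical `read_ops_stub` since the real call needs a live bdevperf socket:

```shell
# Sketch of the waitforio poll seen above. read_ops_stub stands in for
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
counts=(3 67 147)   # the read counts observed in this log
idx=0
read_ops_stub() { echo "${counts[idx++]:-0}"; }

waitforio() {
    local ret=1 i io_count
    for ((i = 10; i != 0; i--)); do
        io_count=$(read_ops_stub)
        if [ "$io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "I/O observed" || echo "timed out"
```

With the logged counts this succeeds on the third iteration, after which the test proceeds to kill the target mid-I/O.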
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1633535 00:19:38.916 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1633535
00:19:38.916 [2024-07-24 20:47:34.338402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257ed60 is same with the state(5) to be set
00:19:38.917 [2024-07-24 20:47:34.340455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2591410 is same with the state(5) to be set
00:19:38.917 [2024-07-24 20:47:34.341476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f220 is same with the state(5) to be set
00:19:38.918 [2024-07-24 20:47:34.343819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344443] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.918 [2024-07-24 20:47:34.344515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344605] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.344629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257f6e0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.345784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257fbc0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.345819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257fbc0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.345834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257fbc0 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346591] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346734] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346872] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.346995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347049] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347231] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.919 [2024-07-24 20:47:34.347265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580080 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.347417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6db50 is same with the state(5) to 
be set 00:19:38.920 [2024-07-24 20:47:34.347621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe58360 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.347811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347846] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.347923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6c3a0 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.347967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.347987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49830 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.348130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff8c0 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.348307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.920 [2024-07-24 20:47:34.348408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6b850 is same with the state(5) to be set 00:19:38.920 [2024-07-24 20:47:34.348696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 
20:47:34.348899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.920 [2024-07-24 20:47:34.348957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.920 [2024-07-24 20:47:34.348971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.348986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.348999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 
20:47:34.349578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.921 [2024-07-24 20:47:34.349693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.921 [2024-07-24 20:47:34.349708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.349981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.349996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350224] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.922 [2024-07-24 20:47:34.350308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 
00:19:38.922 [2024-07-24 20:47:34.350318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.922 [2024-07-24 20:47:34.350321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.922 [2024-07-24 20:47:34.350333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.350615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.350628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 
[2024-07-24 20:47:34.350667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350711] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf420c0 was disconnected and freed. reset controller. 00:19:38.923 [2024-07-24 20:47:34.350715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350804] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.350874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580540 is same with the state(5) to be set 00:19:38.923 [2024-07-24 20:47:34.352986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.353013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.353034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.923 [2024-07-24 20:47:34.353050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.353066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:38.923 [2024-07-24 20:47:34.353080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.923 [2024-07-24 20:47:34.353096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:38.924 [2024-07-24 20:47:34.353587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924
[2024-07-24 20:47:34.353614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924
[2024-07-24 20:47:34.353641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924
[2024-07-24 20:47:34.353680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924
[2024-07-24 20:47:34.353705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924
[2024-07-24 20:47:34.353729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924
[2024-07-24 20:47:34.353743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924 [2024-07-24 20:47:34.353818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.924 [2024-07-24 20:47:34.353830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.924 [2024-07-24 20:47:34.353839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.924
[2024-07-24 20:47:34.353842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.353868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.353880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.353892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.353904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.353929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.353941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.353952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.353970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.353983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.353990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354083] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925
[2024-07-24 20:47:34.354237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925
[2024-07-24 20:47:34.354237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2580a00 is same with the state(5) to be set 00:19:38.925
[2024-07-24 20:47:34.354261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925 [2024-07-24 20:47:34.354276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925 [2024-07-24 20:47:34.354291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925 [2024-07-24 20:47:34.354305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925 [2024-07-24 20:47:34.354320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925 [2024-07-24 20:47:34.354334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925 [2024-07-24 20:47:34.354349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.925 [2024-07-24 20:47:34.354362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.925 [2024-07-24 20:47:34.354377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 
20:47:34.354438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.926 [2024-07-24 20:47:34.354894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.926 [2024-07-24 20:47:34.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.926 [2024-07-24 20:47:34.355070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 
20:47:34.355234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.926 [2024-07-24 20:47:34.355333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355391] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355447] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf3a5b0 was disconnected and freed. reset controller. 
00:19:38.927 [2024-07-24 20:47:34.355461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 
20:47:34.355610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355753] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.355845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aabd0 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356631] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356773] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.356981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357022] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.927 [2024-07-24 20:47:34.357211] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357379] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.357509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab090 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.358184] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:38.928 [2024-07-24 20:47:34.358216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:38.928 [2024-07-24 20:47:34.358253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfff8c0 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.358277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6c3a0 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.358319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358437] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9470 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.358484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe75d00 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.358622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6db50 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.358651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe58360 (9): Bad file 
descriptor 00:19:38.928 [2024-07-24 20:47:34.358680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49830 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.358714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b850 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.358761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94b610 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.358928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.358975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.358988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.359001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.359020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.928 [2024-07-24 20:47:34.359033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.359045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100abd0 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.360755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.928 [2024-07-24 20:47:34.360786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6c3a0 with addr=10.0.0.2, port=4420 00:19:38.928 [2024-07-24 20:47:34.360802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6c3a0 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.360931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:19:38.928 [2024-07-24 20:47:34.360956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfff8c0 with addr=10.0.0.2, port=4420 00:19:38.928 [2024-07-24 20:47:34.360970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff8c0 is same with the state(5) to be set 00:19:38.928 [2024-07-24 20:47:34.361028] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:38.928 [2024-07-24 20:47:34.361094] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:38.928 [2024-07-24 20:47:34.361159] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:38.928 [2024-07-24 20:47:34.361257] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:38.928 [2024-07-24 20:47:34.361321] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:38.928 [2024-07-24 20:47:34.361526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6c3a0 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.361553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfff8c0 (9): Bad file descriptor 00:19:38.928 [2024-07-24 20:47:34.361637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.928 [2024-07-24 20:47:34.361659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.928 [2024-07-24 20:47:34.361682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.928 [2024-07-24 20:47:34.361698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361713] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361875] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.361984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.361999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 
20:47:34.362219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.929 [2024-07-24 20:47:34.362694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.929 [2024-07-24 20:47:34.362709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.930 [2024-07-24 20:47:34.362742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.362974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.362988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 
20:47:34.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf430d0 is same with the state(5) to be set 00:19:38.930 [2024-07-24 20:47:34.363672] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf430d0 was disconnected and freed. reset controller. 00:19:38.930 [2024-07-24 20:47:34.363741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:38.930 [2024-07-24 20:47:34.363905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.363969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.363984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.364000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.364018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.930 [2024-07-24 20:47:34.364034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.930 [2024-07-24 20:47:34.364048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 
20:47:34.364694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.364980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.365009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.365022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.365043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.365057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.931 [2024-07-24 20:47:34.365073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.931 [2024-07-24 20:47:34.365086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 
[2024-07-24 20:47:34.365203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.932 [2024-07-24 20:47:34.365546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.365560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.365798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.365812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45000 is same with the state(5) to be set
00:19:38.932 [2024-07-24 20:47:34.365884] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe45000 was disconnected and freed. reset controller.
00:19:38.932 [2024-07-24 20:47:34.365960] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:38.932 [2024-07-24 20:47:34.366080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:38.932 [2024-07-24 20:47:34.366101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:38.932 [2024-07-24 20:47:34.366117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:38.932 [2024-07-24 20:47:34.366137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:38.932 [2024-07-24 20:47:34.366151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:38.932 [2024-07-24 20:47:34.366163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:38.932 [2024-07-24 20:47:34.368593] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:38.932 [2024-07-24 20:47:34.368627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.932 [2024-07-24 20:47:34.368645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.932 [2024-07-24 20:47:34.368659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:38.932 [2024-07-24 20:47:34.368679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:38.932 [2024-07-24 20:47:34.368705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94b610 (9): Bad file descriptor
00:19:38.932 [2024-07-24 20:47:34.368758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9470 (9): Bad file descriptor
00:19:38.932 [2024-07-24 20:47:34.368793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe75d00 (9): Bad file descriptor
00:19:38.932 [2024-07-24 20:47:34.368850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100abd0 (9): Bad file descriptor
00:19:38.932 [2024-07-24 20:47:34.369120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.932 [2024-07-24 20:47:34.369150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6b850 with addr=10.0.0.2, port=4420
00:19:38.932 [2024-07-24 20:47:34.369168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6b850 is same with the state(5) to be set
00:19:38.932 [2024-07-24 20:47:34.369257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.369279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.369301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.369317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.369333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.369347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.369362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.369376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932 [2024-07-24 20:47:34.369392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.932 [2024-07-24 20:47:34.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.932
[2024-07-24 20:47:34.369421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.932 [2024-07-24 20:47:34.369435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.933 [2024-07-24 20:47:34.369939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.369968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.369982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 20:47:34.370592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.933 [2024-07-24 
20:47:34.370622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.933 [2024-07-24 20:47:34.370638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.370975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.370990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 
[2024-07-24 20:47:34.371125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.371182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.371197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe97a0 is same with the state(5) to be set 00:19:38.934 [2024-07-24 20:47:34.372466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.934 [2024-07-24 20:47:34.372901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.372972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.372987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.373001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.934 [2024-07-24 20:47:34.373017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.934 [2024-07-24 20:47:34.373030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373060] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 
20:47:34.373601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373762] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.373973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.373987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.374015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.374044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.374072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 
[2024-07-24 20:47:34.374101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.374130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.935 [2024-07-24 20:47:34.374145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.935 [2024-07-24 20:47:34.374158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.374412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.374426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeabd0 is same with the state(5) to be set 00:19:38.936 [2024-07-24 20:47:34.375674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.936 [2024-07-24 20:47:34.375864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.375972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.375987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.376000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.936 [2024-07-24 20:47:34.376016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.936 [2024-07-24 20:47:34.376030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.937 [2024-07-24 20:47:34.376383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376550] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.937 [2024-07-24 20:47:34.376802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.937 [2024-07-24 20:47:34.376815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.376976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.376990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.377006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 20:47:34.377019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.938 [2024-07-24 20:47:34.377039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.938 [2024-07-24 
20:47:34.377053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.377403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.377420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.938 [2024-07-24 20:47:34.383765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.938 [2024-07-24 20:47:34.383780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec0c0 is same with the state(5) to be set
00:19:38.938 [2024-07-24 20:47:34.385656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:38.938 [2024-07-24 20:47:34.385690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:38.938 [2024-07-24 20:47:34.385708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:38.939 [2024-07-24 20:47:34.386013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.939 [2024-07-24 20:47:34.386046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94b610 with addr=10.0.0.2, port=4420
00:19:38.939 [2024-07-24 20:47:34.386063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94b610 is same with the state(5) to be set
00:19:38.939 [2024-07-24 20:47:34.386089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b850 (9): Bad file descriptor
00:19:38.939 [2024-07-24 20:47:34.386172] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.939 [2024-07-24 20:47:34.386198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94b610 (9): Bad file descriptor
00:19:38.939 [2024-07-24 20:47:34.386439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.939 [2024-07-24 20:47:34.386468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe49830 with addr=10.0.0.2, port=4420
00:19:38.939 [2024-07-24 20:47:34.386485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49830 is same with the state(5) to be set
00:19:38.939 [2024-07-24 20:47:34.386610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.939 [2024-07-24 20:47:34.386636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6db50 with addr=10.0.0.2, port=4420
00:19:38.939 [2024-07-24 20:47:34.386665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6db50 is same with the state(5) to be set
00:19:38.939 [2024-07-24 20:47:34.386784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.939 [2024-07-24 20:47:34.386809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe58360 with addr=10.0.0.2, port=4420
00:19:38.939 [2024-07-24 20:47:34.386824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe58360 is same with the state(5) to be set
00:19:38.939 [2024-07-24 20:47:34.386840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:38.939 [2024-07-24 20:47:34.386853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:38.939 [2024-07-24 20:47:34.386868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:38.939 [2024-07-24 20:47:34.387744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.387979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.387994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.939 [2024-07-24 20:47:34.388201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.939 [2024-07-24 20:47:34.388214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.388979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.940 [2024-07-24 20:47:34.388993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.940 [2024-07-24 20:47:34.389008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.389651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.389665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf445c0 is same with the state(5) to be set
00:19:38.941 [2024-07-24 20:47:34.390945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.390968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.390988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.941 [2024-07-24 20:47:34.391003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.941 [2024-07-24 20:47:34.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.942 [2024-07-24 20:47:34.391719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.942 [2024-07-24 20:47:34.391733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:19:38.942 [2024-07-24 20:47:34.391748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.942 [2024-07-24 20:47:34.391763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.942 [2024-07-24 20:47:34.391778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.942 [2024-07-24 20:47:34.391792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.942 [2024-07-24 20:47:34.391807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.942 [2024-07-24 20:47:34.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.942 [2024-07-24 20:47:34.391836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.942 [2024-07-24 20:47:34.391850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.391866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.391879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.391895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 
20:47:34.391909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.391925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.391939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.391958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.391972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.391988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392076] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 
[2024-07-24 20:47:34.392423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.943 [2024-07-24 20:47:34.392438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.943 [2024-07-24 20:47:34.392452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.392867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.392881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf454d0 is same with the state(5) to be set 00:19:38.944 [2024-07-24 20:47:34.394111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.944 [2024-07-24 20:47:34.394154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.944 [2024-07-24 20:47:34.394641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.944 [2024-07-24 20:47:34.394654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.944 [2024-07-24 20:47:34.394670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.394979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.394994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 20:47:34.395315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.945 [2024-07-24 20:47:34.395330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.945 [2024-07-24 
20:47:34.395344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.945 [2024-07-24 20:47:34.395684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.945 [2024-07-24 20:47:34.395698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.395984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.395998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.396013] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:38.946 [2024-07-24 20:47:34.396027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.946 [2024-07-24 20:47:34.396041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468b0 is same with the state(5) to be set
00:19:38.946 [2024-07-24 20:47:34.398007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:38.946 [2024-07-24 20:47:34.398039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:38.946 [2024-07-24 20:47:34.398059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.946 [2024-07-24 20:47:34.398075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:38.946 [2024-07-24 20:47:34.398091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:38.946 task offset: 24576 on job bdev=Nvme4n1 fails
00:19:38.946
00:19:38.946 Latency(us)
00:19:38.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:38.946 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme1n1 ended in about 0.92 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme1n1 : 0.92 208.43 13.03 69.48 0.00 227704.04 19320.98 250104.79
00:19:38.946 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme2n1 ended in about 0.92 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme2n1 : 0.92 138.48 8.65 69.24 0.00 298656.81 24466.77 259425.47
00:19:38.946 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme3n1 ended in about 0.93 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme3n1 : 0.93 205.63 12.85 68.54 0.00 221687.09 16019.91 233016.89
00:19:38.946 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme4n1 ended in about 0.91 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme4n1 : 0.91 212.02 13.25 70.67 0.00 210110.86 9854.67 256318.58
00:19:38.946 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme5n1 ended in about 0.92 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme5n1 : 0.92 146.26 9.14 69.85 0.00 269197.52 13981.01 250104.79
00:19:38.946 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme6n1 ended in about 0.94 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme6n1 : 0.94 136.23 8.51 68.12 0.00 279499.92 26020.22 282727.16
00:19:38.946 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme7n1 ended in about 0.92 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme7n1 : 0.92 209.31 13.08 69.77 0.00 199559.21 14854.83 251658.24
00:19:38.946 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme8n1 ended in about 0.94 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme8n1 : 0.94 135.77 8.49 67.89 0.00 268604.05 22233.69 285834.05
00:19:38.946 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme9n1 ended in about 0.95 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme9n1 : 0.95 140.61 8.79 67.66 0.00 257103.93 20583.16 242337.56
00:19:38.946 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.946 Job: Nvme10n1 ended in about 0.91 seconds with error
00:19:38.946 Verification LBA range: start 0x0 length 0x400
00:19:38.946 Nvme10n1 : 0.91 141.12 8.82 70.56 0.00 245001.10 10000.31 309135.74
00:19:38.946 ===================================================================================================================
00:19:38.946 Total : 1673.85 104.62 691.78 0.00 243936.23 9854.67 309135.74
00:19:38.946 [2024-07-24 20:47:34.424709] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:38.947 [2024-07-24 20:47:34.424863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49830 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.424895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6db50 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.424915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe58360 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.424932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.424946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.424962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:38.947 [2024-07-24 20:47:34.425031] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.425067] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.425087] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.425118] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.425138] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.425284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:38.947 [2024-07-24 20:47:34.425328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.425641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.425677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfff8c0 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.425696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff8c0 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.425807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.425833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6c3a0 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.425850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6c3a0 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.425953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.425979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe75d00 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.425994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe75d00 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.426092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.426118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100abd0 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.426133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100abd0 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.426147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.426160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.426172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:38.947 [2024-07-24 20:47:34.426192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.426207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.426219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:38.947 [2024-07-24 20:47:34.426238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.426261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.426274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:38.947 [2024-07-24 20:47:34.426310] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.426341] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.426360] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.426377] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.947 [2024-07-24 20:47:34.427248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:38.947 [2024-07-24 20:47:34.427304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.427323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.427336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.427453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.427480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9470 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.427497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9470 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.427516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfff8c0 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.427540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6c3a0 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.427558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe75d00 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.427575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100abd0 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.427912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:38.947 [2024-07-24 20:47:34.428066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.428094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6b850 with addr=10.0.0.2, port=4420
00:19:38.947 [2024-07-24 20:47:34.428111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6b850 is same with the state(5) to be set
00:19:38.947 [2024-07-24 20:47:34.428129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9470 (9): Bad file descriptor
00:19:38.947 [2024-07-24 20:47:34.428147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.428160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.428174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:38.947 [2024-07-24 20:47:34.428191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.428205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.428217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:38.947 [2024-07-24 20:47:34.428252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.428268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.428280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:38.947 [2024-07-24 20:47:34.428296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:38.947 [2024-07-24 20:47:34.428309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:38.947 [2024-07-24 20:47:34.428321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:38.947 [2024-07-24 20:47:34.428383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.428402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.428414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.428425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.947 [2024-07-24 20:47:34.428559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.947 [2024-07-24 20:47:34.428584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94b610 with addr=10.0.0.2, port=4420
00:19:38.948 [2024-07-24 20:47:34.428600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94b610 is same with the state(5) to be set
00:19:38.948 [2024-07-24 20:47:34.428618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b850 (9): Bad file descriptor
00:19:38.948 [2024-07-24 20:47:34.428635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:38.948 [2024-07-24 20:47:34.428648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:38.948 [2024-07-24 20:47:34.428660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:38.948 [2024-07-24 20:47:34.428703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.948 [2024-07-24 20:47:34.428725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94b610 (9): Bad file descriptor
00:19:38.948 [2024-07-24 20:47:34.428742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:38.948 [2024-07-24 20:47:34.428755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:38.948 [2024-07-24 20:47:34.428768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:38.948 [2024-07-24 20:47:34.428803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.948 [2024-07-24 20:47:34.428822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:38.948 [2024-07-24 20:47:34.428834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:38.948 [2024-07-24 20:47:34.428847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:38.948 [2024-07-24 20:47:34.428885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:39.543 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:19:39.543 20:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1633725
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1633725) - No such process
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:40.479 rmmod nvme_tcp
00:19:40.479 rmmod nvme_fabrics
00:19:40.479 rmmod nvme_keyring
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:40.479 20:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:43.009
00:19:43.009 real 0m8.220s
00:19:43.009 user 0m21.280s
00:19:43.009 sys 0m1.533s
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:43.009 ************************************
00:19:43.009 END TEST nvmf_shutdown_tc3
00:19:43.009 ************************************
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:19:43.009
00:19:43.009 real 0m29.767s
00:19:43.009 user 1m25.951s
00:19:43.009 sys 0m6.662s
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:43.009 ************************************
00:19:43.009 END TEST nvmf_shutdown
00:19:43.009 ************************************
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:19:43.009
00:19:43.009 real 10m36.103s
00:19:43.009 user 25m25.447s
00:19:43.009 sys 2m31.411s
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:43.009 20:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:43.009 ************************************
00:19:43.009 END TEST nvmf_target_extra
00:19:43.009 ************************************
00:19:43.009 20:47:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:19:43.009 20:47:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:43.009 20:47:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:43.009 20:47:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:43.009 ************************************
00:19:43.009 START TEST nvmf_host
00:19:43.009 ************************************
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:19:43.009 * Looking for test storage...
00:19:43.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:43.009 ************************************
00:19:43.009 START TEST nvmf_multicontroller
00:19:43.009 ************************************
00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:19:43.009 * Looking for test storage...
00:19:43.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.009 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.010 20:47:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:44.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.910 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:44.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:44.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:44.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:19:44.911 00:19:44.911 --- 10.0.0.2 ping statistics --- 00:19:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.911 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:44.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:19:44.911 00:19:44.911 --- 10.0.0.1 ping statistics --- 00:19:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.911 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1636278 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1636278 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1636278 ']' 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.911 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.911 [2024-07-24 20:47:40.456804] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:44.911 [2024-07-24 20:47:40.456900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.169 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.169 [2024-07-24 20:47:40.525930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:45.169 [2024-07-24 20:47:40.637210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.169 [2024-07-24 20:47:40.637278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:45.169 [2024-07-24 20:47:40.637293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.169 [2024-07-24 20:47:40.637304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.169 [2024-07-24 20:47:40.637314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.169 [2024-07-24 20:47:40.639278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.169 [2024-07-24 20:47:40.639361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.169 [2024-07-24 20:47:40.639364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 [2024-07-24 20:47:40.781004] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 Malloc0 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 [2024-07-24 
20:47:40.841397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 [2024-07-24 20:47:40.849235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 Malloc1 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.428 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1636420 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1636420 /var/tmp/bdevperf.sock 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@831 -- # '[' -z 1636420 ']' 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.429 20:47:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.686 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.686 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:45.686 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:45.686 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.686 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.944 NVMe0n1 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.944 1 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.944 request: 00:19:45.944 { 00:19:45.944 "name": "NVMe0", 00:19:45.944 "trtype": "tcp", 00:19:45.944 "traddr": "10.0.0.2", 00:19:45.944 "adrfam": "ipv4", 00:19:45.944 "trsvcid": "4420", 00:19:45.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.944 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:45.944 "hostaddr": "10.0.0.2", 00:19:45.944 "hostsvcid": "60000", 00:19:45.944 "prchk_reftag": false, 00:19:45.944 "prchk_guard": false, 00:19:45.944 "hdgst": false, 00:19:45.944 "ddgst": false, 00:19:45.944 "method": "bdev_nvme_attach_controller", 00:19:45.944 "req_id": 1 00:19:45.944 } 00:19:45.944 Got JSON-RPC error response 00:19:45.944 response: 00:19:45.944 { 00:19:45.944 "code": -114, 00:19:45.944 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:45.944 } 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:45.944 20:47:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.944 request: 00:19:45.944 { 00:19:45.944 "name": "NVMe0", 00:19:45.944 "trtype": "tcp", 00:19:45.944 "traddr": "10.0.0.2", 00:19:45.944 "adrfam": "ipv4", 00:19:45.944 "trsvcid": "4420", 00:19:45.944 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:45.944 "hostaddr": "10.0.0.2", 00:19:45.944 "hostsvcid": "60000", 00:19:45.944 "prchk_reftag": false, 00:19:45.944 "prchk_guard": false, 00:19:45.944 "hdgst": false, 00:19:45.944 "ddgst": false, 00:19:45.944 "method": "bdev_nvme_attach_controller", 00:19:45.944 "req_id": 1 00:19:45.944 } 00:19:45.944 Got JSON-RPC error response 00:19:45.944 response: 00:19:45.944 { 00:19:45.944 "code": -114, 00:19:45.944 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:45.944 } 00:19:45.944 
20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:45.944 20:47:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.944 request: 00:19:45.944 { 00:19:45.944 "name": "NVMe0", 00:19:45.944 "trtype": "tcp", 00:19:45.944 "traddr": "10.0.0.2", 00:19:45.944 "adrfam": "ipv4", 00:19:45.944 "trsvcid": "4420", 00:19:45.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.944 "hostaddr": "10.0.0.2", 00:19:45.944 "hostsvcid": "60000", 00:19:45.944 "prchk_reftag": false, 00:19:45.944 "prchk_guard": false, 00:19:45.944 "hdgst": false, 00:19:45.944 "ddgst": false, 00:19:45.944 "multipath": "disable", 00:19:45.944 "method": "bdev_nvme_attach_controller", 00:19:45.944 "req_id": 1 00:19:45.944 } 00:19:45.944 Got JSON-RPC error response 00:19:45.944 response: 00:19:45.944 { 00:19:45.944 "code": -114, 00:19:45.944 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:45.944 } 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.944 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.945 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 request: 00:19:46.202 { 00:19:46.202 "name": "NVMe0", 00:19:46.202 "trtype": "tcp", 00:19:46.202 "traddr": "10.0.0.2", 00:19:46.202 "adrfam": "ipv4", 00:19:46.202 "trsvcid": "4420", 00:19:46.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.202 "hostaddr": "10.0.0.2", 00:19:46.202 "hostsvcid": "60000", 00:19:46.202 "prchk_reftag": false, 00:19:46.202 "prchk_guard": false, 00:19:46.202 "hdgst": false, 00:19:46.202 "ddgst": false, 00:19:46.202 "multipath": "failover", 00:19:46.202 "method": "bdev_nvme_attach_controller", 00:19:46.202 "req_id": 1 00:19:46.202 } 00:19:46.202 Got JSON-RPC error response 00:19:46.202 response: 00:19:46.202 { 00:19:46.202 "code": -114, 00:19:46.202 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:46.202 
} 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:46.202 20:47:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.202 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.459 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:46.459 20:47:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.830 0 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1636420 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 1636420 ']' 00:19:47.830 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1636420 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636420 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636420' 00:19:47.831 killing process with pid 1636420 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1636420 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1636420 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.831 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:48.089 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:48.089 [2024-07-24 20:47:40.956098] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:19:48.089 [2024-07-24 20:47:40.956183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636420 ] 00:19:48.089 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.089 [2024-07-24 20:47:41.015085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.089 [2024-07-24 20:47:41.122739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.089 [2024-07-24 20:47:41.946503] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 06e254ba-3f0c-418d-b3bb-11c24d919c42 already exists 00:19:48.089 [2024-07-24 20:47:41.946558] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:06e254ba-3f0c-418d-b3bb-11c24d919c42 alias for bdev NVMe1n1 00:19:48.089 [2024-07-24 20:47:41.946574] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:48.089 Running I/O for 1 seconds... 
00:19:48.089 00:19:48.089 Latency(us) 00:19:48.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.089 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:48.089 NVMe0n1 : 1.00 19124.99 74.71 0.00 0.00 6681.54 5825.42 13981.01 00:19:48.089 =================================================================================================================== 00:19:48.089 Total : 19124.99 74.71 0.00 0.00 6681.54 5825.42 13981.01 00:19:48.089 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.089 00:19:48.089 Latency(us) 00:19:48.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.089 =================================================================================================================== 00:19:48.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.089 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.089 
rmmod nvme_tcp 00:19:48.089 rmmod nvme_fabrics 00:19:48.089 rmmod nvme_keyring 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1636278 ']' 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1636278 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1636278 ']' 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1636278 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636278 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636278' 00:19:48.089 killing process with pid 1636278 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1636278 00:19:48.089 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1636278 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:48.348 20:47:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.348 20:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.879 00:19:50.879 real 0m7.676s 00:19:50.879 user 0m12.604s 00:19:50.879 sys 0m2.260s 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.879 ************************************ 00:19:50.879 END TEST nvmf_multicontroller 00:19:50.879 ************************************ 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.879 ************************************ 00:19:50.879 START TEST nvmf_aer 00:19:50.879 ************************************ 00:19:50.879 20:47:45 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:50.879 * Looking for test storage... 00:19:50.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.879 20:47:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.790 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:52.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.790 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.791 20:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:19:52.791 00:19:52.791 --- 10.0.0.2 ping statistics --- 00:19:52.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.791 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:19:52.791 00:19:52.791 --- 10.0.0.1 ping statistics --- 00:19:52.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.791 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1638639 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1638639 00:19:52.791 20:47:48 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1638639 ']' 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.791 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.791 [2024-07-24 20:47:48.125872] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:19:52.791 [2024-07-24 20:47:48.125952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.791 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.791 [2024-07-24 20:47:48.188691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.791 [2024-07-24 20:47:48.296940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.791 [2024-07-24 20:47:48.296990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.791 [2024-07-24 20:47:48.297004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.791 [2024-07-24 20:47:48.297016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:52.791 [2024-07-24 20:47:48.297025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.791 [2024-07-24 20:47:48.297110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.791 [2024-07-24 20:47:48.297176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.791 [2024-07-24 20:47:48.297250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.791 [2024-07-24 20:47:48.297251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 [2024-07-24 20:47:48.455788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 Malloc0 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 [2024-07-24 20:47:48.509103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 [ 
00:19:53.049 { 00:19:53.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:53.049 "subtype": "Discovery", 00:19:53.049 "listen_addresses": [], 00:19:53.049 "allow_any_host": true, 00:19:53.049 "hosts": [] 00:19:53.049 }, 00:19:53.049 { 00:19:53.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.049 "subtype": "NVMe", 00:19:53.049 "listen_addresses": [ 00:19:53.049 { 00:19:53.049 "trtype": "TCP", 00:19:53.049 "adrfam": "IPv4", 00:19:53.049 "traddr": "10.0.0.2", 00:19:53.049 "trsvcid": "4420" 00:19:53.049 } 00:19:53.049 ], 00:19:53.049 "allow_any_host": true, 00:19:53.049 "hosts": [], 00:19:53.049 "serial_number": "SPDK00000000000001", 00:19:53.049 "model_number": "SPDK bdev Controller", 00:19:53.049 "max_namespaces": 2, 00:19:53.049 "min_cntlid": 1, 00:19:53.049 "max_cntlid": 65519, 00:19:53.049 "namespaces": [ 00:19:53.049 { 00:19:53.049 "nsid": 1, 00:19:53.049 "bdev_name": "Malloc0", 00:19:53.049 "name": "Malloc0", 00:19:53.049 "nguid": "693F2C4FD25E40C2AB4C3856A48745D1", 00:19:53.049 "uuid": "693f2c4f-d25e-40c2-ab4c-3856a48745d1" 00:19:53.049 } 00:19:53.049 ] 00:19:53.049 } 00:19:53.049 ] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:53.049 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1638667 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:53.050 20:47:48 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:53.050 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:53.050 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.308 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 Malloc1 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 [ 00:19:53.566 { 00:19:53.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:53.566 "subtype": "Discovery", 00:19:53.566 "listen_addresses": [], 00:19:53.566 "allow_any_host": true, 00:19:53.566 "hosts": [] 00:19:53.566 }, 00:19:53.566 { 00:19:53.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.566 "subtype": "NVMe", 00:19:53.566 "listen_addresses": [ 00:19:53.566 { 00:19:53.566 "trtype": "TCP", 00:19:53.566 "adrfam": "IPv4", 00:19:53.566 "traddr": "10.0.0.2", 00:19:53.566 "trsvcid": "4420" 00:19:53.566 } 00:19:53.566 ], 00:19:53.566 "allow_any_host": true, 00:19:53.566 "hosts": [], 00:19:53.566 "serial_number": "SPDK00000000000001", 00:19:53.566 "model_number": 
"SPDK bdev Controller", 00:19:53.566 "max_namespaces": 2, 00:19:53.566 "min_cntlid": 1, 00:19:53.566 "max_cntlid": 65519, 00:19:53.566 "namespaces": [ 00:19:53.566 { 00:19:53.566 "nsid": 1, 00:19:53.566 "bdev_name": "Malloc0", 00:19:53.566 "name": "Malloc0", 00:19:53.566 "nguid": "693F2C4FD25E40C2AB4C3856A48745D1", 00:19:53.566 "uuid": "693f2c4f-d25e-40c2-ab4c-3856a48745d1" 00:19:53.566 }, 00:19:53.566 { 00:19:53.566 "nsid": 2, 00:19:53.566 "bdev_name": "Malloc1", 00:19:53.566 "name": "Malloc1", 00:19:53.566 "nguid": "CD30EFFD85CA4EA78B4714E875D8951F", 00:19:53.566 "uuid": "cd30effd-85ca-4ea7-8b47-14e875d8951f" 00:19:53.566 } 00:19:53.566 ] 00:19:53.566 } 00:19:53.566 ] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1638667 00:19:53.566 Asynchronous Event Request test 00:19:53.566 Attaching to 10.0.0.2 00:19:53.566 Attached to 10.0.0.2 00:19:53.566 Registering asynchronous event callbacks... 00:19:53.566 Starting namespace attribute notice tests for all controllers... 00:19:53.566 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:53.566 aer_cb - Changed Namespace 00:19:53.566 Cleaning up... 
00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.566 20:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.566 rmmod nvme_tcp 
00:19:53.566 rmmod nvme_fabrics 00:19:53.566 rmmod nvme_keyring 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1638639 ']' 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1638639 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1638639 ']' 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1638639 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1638639 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638639' 00:19:53.566 killing process with pid 1638639 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1638639 00:19:53.566 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1638639 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.825 20:47:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.825 20:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.354 00:19:56.354 real 0m5.504s 00:19:56.354 user 0m4.744s 00:19:56.354 sys 0m1.903s 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.354 ************************************ 00:19:56.354 END TEST nvmf_aer 00:19:56.354 ************************************ 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.354 ************************************ 00:19:56.354 START TEST nvmf_async_init 00:19:56.354 ************************************ 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:56.354 * Looking for test storage... 
00:19:56.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.354 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.355 20:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:56.355 20:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9040eb6f680f4a6cbde0b7a45b56eb23 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.355 20:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.256 
20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:58.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.256 20:47:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:58.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:58.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:58.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:19:58.256 00:19:58.256 --- 10.0.0.2 ping statistics --- 00:19:58.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.256 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:19:58.256 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:19:58.257 00:19:58.257 --- 10.0.0.1 ping statistics --- 00:19:58.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.257 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.257 20:47:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1640720 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1640720 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1640720 ']' 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.257 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 [2024-07-24 20:47:53.652657] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:19:58.257 [2024-07-24 20:47:53.652738] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.257 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.257 [2024-07-24 20:47:53.714867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.257 [2024-07-24 20:47:53.820247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.257 [2024-07-24 20:47:53.820298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.257 [2024-07-24 20:47:53.820313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.257 [2024-07-24 20:47:53.820325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.257 [2024-07-24 20:47:53.820335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.257 [2024-07-24 20:47:53.820369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 [2024-07-24 20:47:53.963708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 null0 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9040eb6f680f4a6cbde0b7a45b56eb23 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 [2024-07-24 20:47:54.003941] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.514 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.514 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:58.514 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.514 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.771 nvme0n1 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.771 [ 00:19:58.771 { 00:19:58.771 "name": "nvme0n1", 00:19:58.771 "aliases": [ 00:19:58.771 "9040eb6f-680f-4a6c-bde0-b7a45b56eb23" 00:19:58.771 ], 00:19:58.771 "product_name": "NVMe disk", 00:19:58.771 "block_size": 512, 00:19:58.771 "num_blocks": 2097152, 00:19:58.771 "uuid": "9040eb6f-680f-4a6c-bde0-b7a45b56eb23", 00:19:58.771 "assigned_rate_limits": { 00:19:58.771 "rw_ios_per_sec": 0, 00:19:58.771 "rw_mbytes_per_sec": 0, 00:19:58.771 "r_mbytes_per_sec": 0, 00:19:58.771 "w_mbytes_per_sec": 0 00:19:58.771 }, 00:19:58.771 "claimed": false, 00:19:58.771 "zoned": false, 00:19:58.771 "supported_io_types": { 00:19:58.771 "read": true, 00:19:58.771 "write": true, 00:19:58.771 "unmap": false, 00:19:58.771 "flush": true, 00:19:58.771 "reset": true, 00:19:58.771 "nvme_admin": true, 00:19:58.771 "nvme_io": true, 00:19:58.771 "nvme_io_md": false, 00:19:58.771 "write_zeroes": true, 00:19:58.771 "zcopy": false, 00:19:58.771 "get_zone_info": false, 00:19:58.771 "zone_management": false, 00:19:58.771 "zone_append": false, 00:19:58.771 "compare": true, 00:19:58.771 "compare_and_write": true, 00:19:58.771 "abort": true, 00:19:58.771 "seek_hole": false, 00:19:58.771 "seek_data": false, 00:19:58.771 "copy": true, 00:19:58.771 "nvme_iov_md": false 
00:19:58.771 }, 00:19:58.771 "memory_domains": [ 00:19:58.771 { 00:19:58.771 "dma_device_id": "system", 00:19:58.771 "dma_device_type": 1 00:19:58.771 } 00:19:58.771 ], 00:19:58.771 "driver_specific": { 00:19:58.771 "nvme": [ 00:19:58.771 { 00:19:58.771 "trid": { 00:19:58.771 "trtype": "TCP", 00:19:58.771 "adrfam": "IPv4", 00:19:58.771 "traddr": "10.0.0.2", 00:19:58.771 "trsvcid": "4420", 00:19:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:58.771 }, 00:19:58.771 "ctrlr_data": { 00:19:58.771 "cntlid": 1, 00:19:58.771 "vendor_id": "0x8086", 00:19:58.771 "model_number": "SPDK bdev Controller", 00:19:58.771 "serial_number": "00000000000000000000", 00:19:58.771 "firmware_revision": "24.09", 00:19:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.771 "oacs": { 00:19:58.771 "security": 0, 00:19:58.771 "format": 0, 00:19:58.771 "firmware": 0, 00:19:58.771 "ns_manage": 0 00:19:58.771 }, 00:19:58.771 "multi_ctrlr": true, 00:19:58.771 "ana_reporting": false 00:19:58.771 }, 00:19:58.771 "vs": { 00:19:58.771 "nvme_version": "1.3" 00:19:58.771 }, 00:19:58.771 "ns_data": { 00:19:58.771 "id": 1, 00:19:58.771 "can_share": true 00:19:58.771 } 00:19:58.771 } 00:19:58.771 ], 00:19:58.771 "mp_policy": "active_passive" 00:19:58.771 } 00:19:58.771 } 00:19:58.771 ] 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.771 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:58.771 [2024-07-24 20:47:54.257269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.771 [2024-07-24 20:47:54.257361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c2210 
(9): Bad file descriptor 00:19:59.030 [2024-07-24 20:47:54.399408] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.030 [ 00:19:59.030 { 00:19:59.030 "name": "nvme0n1", 00:19:59.030 "aliases": [ 00:19:59.030 "9040eb6f-680f-4a6c-bde0-b7a45b56eb23" 00:19:59.030 ], 00:19:59.030 "product_name": "NVMe disk", 00:19:59.030 "block_size": 512, 00:19:59.030 "num_blocks": 2097152, 00:19:59.030 "uuid": "9040eb6f-680f-4a6c-bde0-b7a45b56eb23", 00:19:59.030 "assigned_rate_limits": { 00:19:59.030 "rw_ios_per_sec": 0, 00:19:59.030 "rw_mbytes_per_sec": 0, 00:19:59.030 "r_mbytes_per_sec": 0, 00:19:59.030 "w_mbytes_per_sec": 0 00:19:59.030 }, 00:19:59.030 "claimed": false, 00:19:59.030 "zoned": false, 00:19:59.030 "supported_io_types": { 00:19:59.030 "read": true, 00:19:59.030 "write": true, 00:19:59.030 "unmap": false, 00:19:59.030 "flush": true, 00:19:59.030 "reset": true, 00:19:59.030 "nvme_admin": true, 00:19:59.030 "nvme_io": true, 00:19:59.030 "nvme_io_md": false, 00:19:59.030 "write_zeroes": true, 00:19:59.030 "zcopy": false, 00:19:59.030 "get_zone_info": false, 00:19:59.030 "zone_management": false, 00:19:59.030 "zone_append": false, 00:19:59.030 "compare": true, 00:19:59.030 "compare_and_write": true, 00:19:59.030 "abort": true, 00:19:59.030 "seek_hole": false, 00:19:59.030 "seek_data": false, 00:19:59.030 "copy": true, 00:19:59.030 "nvme_iov_md": false 00:19:59.030 }, 00:19:59.030 "memory_domains": [ 00:19:59.030 { 00:19:59.030 "dma_device_id": "system", 00:19:59.030 "dma_device_type": 1 
00:19:59.030 } 00:19:59.030 ], 00:19:59.030 "driver_specific": { 00:19:59.030 "nvme": [ 00:19:59.030 { 00:19:59.030 "trid": { 00:19:59.030 "trtype": "TCP", 00:19:59.030 "adrfam": "IPv4", 00:19:59.030 "traddr": "10.0.0.2", 00:19:59.030 "trsvcid": "4420", 00:19:59.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:59.030 }, 00:19:59.030 "ctrlr_data": { 00:19:59.030 "cntlid": 2, 00:19:59.030 "vendor_id": "0x8086", 00:19:59.030 "model_number": "SPDK bdev Controller", 00:19:59.030 "serial_number": "00000000000000000000", 00:19:59.030 "firmware_revision": "24.09", 00:19:59.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.030 "oacs": { 00:19:59.030 "security": 0, 00:19:59.030 "format": 0, 00:19:59.030 "firmware": 0, 00:19:59.030 "ns_manage": 0 00:19:59.030 }, 00:19:59.030 "multi_ctrlr": true, 00:19:59.030 "ana_reporting": false 00:19:59.030 }, 00:19:59.030 "vs": { 00:19:59.030 "nvme_version": "1.3" 00:19:59.030 }, 00:19:59.030 "ns_data": { 00:19:59.030 "id": 1, 00:19:59.030 "can_share": true 00:19:59.030 } 00:19:59.030 } 00:19:59.030 ], 00:19:59.030 "mp_policy": "active_passive" 00:19:59.030 } 00:19:59.030 } 00:19:59.030 ] 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.F1pKRXg7XJ 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.F1pKRXg7XJ 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.030 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 [2024-07-24 20:47:54.453942] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.031 [2024-07-24 20:47:54.454074] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F1pKRXg7XJ 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 [2024-07-24 20:47:54.461956] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F1pKRXg7XJ 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 [2024-07-24 20:47:54.469982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.031 [2024-07-24 20:47:54.470042] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.031 nvme0n1 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 [ 00:19:59.031 { 00:19:59.031 "name": "nvme0n1", 00:19:59.031 "aliases": [ 00:19:59.031 "9040eb6f-680f-4a6c-bde0-b7a45b56eb23" 00:19:59.031 ], 00:19:59.031 "product_name": "NVMe disk", 00:19:59.031 "block_size": 512, 00:19:59.031 "num_blocks": 2097152, 00:19:59.031 "uuid": "9040eb6f-680f-4a6c-bde0-b7a45b56eb23", 00:19:59.031 "assigned_rate_limits": { 00:19:59.031 "rw_ios_per_sec": 0, 00:19:59.031 "rw_mbytes_per_sec": 0, 00:19:59.031 "r_mbytes_per_sec": 0, 00:19:59.031 "w_mbytes_per_sec": 0 00:19:59.031 }, 00:19:59.031 "claimed": false, 00:19:59.031 "zoned": false, 00:19:59.031 "supported_io_types": { 
00:19:59.031 "read": true, 00:19:59.031 "write": true, 00:19:59.031 "unmap": false, 00:19:59.031 "flush": true, 00:19:59.031 "reset": true, 00:19:59.031 "nvme_admin": true, 00:19:59.031 "nvme_io": true, 00:19:59.031 "nvme_io_md": false, 00:19:59.031 "write_zeroes": true, 00:19:59.031 "zcopy": false, 00:19:59.031 "get_zone_info": false, 00:19:59.031 "zone_management": false, 00:19:59.031 "zone_append": false, 00:19:59.031 "compare": true, 00:19:59.031 "compare_and_write": true, 00:19:59.031 "abort": true, 00:19:59.031 "seek_hole": false, 00:19:59.031 "seek_data": false, 00:19:59.031 "copy": true, 00:19:59.031 "nvme_iov_md": false 00:19:59.031 }, 00:19:59.031 "memory_domains": [ 00:19:59.031 { 00:19:59.031 "dma_device_id": "system", 00:19:59.031 "dma_device_type": 1 00:19:59.031 } 00:19:59.031 ], 00:19:59.031 "driver_specific": { 00:19:59.031 "nvme": [ 00:19:59.031 { 00:19:59.031 "trid": { 00:19:59.031 "trtype": "TCP", 00:19:59.031 "adrfam": "IPv4", 00:19:59.031 "traddr": "10.0.0.2", 00:19:59.031 "trsvcid": "4421", 00:19:59.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:59.031 }, 00:19:59.031 "ctrlr_data": { 00:19:59.031 "cntlid": 3, 00:19:59.031 "vendor_id": "0x8086", 00:19:59.031 "model_number": "SPDK bdev Controller", 00:19:59.031 "serial_number": "00000000000000000000", 00:19:59.031 "firmware_revision": "24.09", 00:19:59.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.031 "oacs": { 00:19:59.031 "security": 0, 00:19:59.031 "format": 0, 00:19:59.031 "firmware": 0, 00:19:59.031 "ns_manage": 0 00:19:59.031 }, 00:19:59.031 "multi_ctrlr": true, 00:19:59.031 "ana_reporting": false 00:19:59.031 }, 00:19:59.031 "vs": { 00:19:59.031 "nvme_version": "1.3" 00:19:59.031 }, 00:19:59.031 "ns_data": { 00:19:59.031 "id": 1, 00:19:59.031 "can_share": true 00:19:59.031 } 00:19:59.031 } 00:19:59.031 ], 00:19:59.031 "mp_policy": "active_passive" 00:19:59.031 } 00:19:59.031 } 00:19:59.031 ] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.F1pKRXg7XJ 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.031 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.031 rmmod nvme_tcp 00:19:59.325 rmmod nvme_fabrics 00:19:59.325 rmmod nvme_keyring 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1640720 ']' 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
1640720 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1640720 ']' 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1640720 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640720 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640720' 00:19:59.325 killing process with pid 1640720 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1640720 00:19:59.325 [2024-07-24 20:47:54.659237] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:59.325 [2024-07-24 20:47:54.659305] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.325 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1640720 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.583 20:47:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.482 00:20:01.482 real 0m5.501s 00:20:01.482 user 0m2.102s 00:20:01.482 sys 0m1.783s 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.482 ************************************ 00:20:01.482 END TEST nvmf_async_init 00:20:01.482 ************************************ 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.482 20:47:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.482 ************************************ 00:20:01.482 START TEST dma 00:20:01.482 ************************************ 00:20:01.482 20:47:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.740 * Looking for test storage... 
00:20:01.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.740 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.740 20:47:57 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:01.741 00:20:01.741 real 0m0.068s 00:20:01.741 user 0m0.035s 00:20:01.741 sys 0m0.038s 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.741 ************************************ 00:20:01.741 END TEST dma 00:20:01.741 ************************************ 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.741 ************************************ 00:20:01.741 START TEST nvmf_identify 00:20:01.741 ************************************ 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.741 * Looking for test storage... 
00:20:01.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.741 20:47:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.638 20:47:59 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:03.638 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:03.638 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:03.638 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:03.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.638 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.639 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:20:03.897 00:20:03.897 --- 10.0.0.2 ping statistics --- 00:20:03.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.897 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:03.897 00:20:03.897 --- 10.0.0.1 ping statistics --- 00:20:03.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.897 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.897 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1642847 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1642847 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1642847 ']' 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.898 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:03.898 [2024-07-24 20:47:59.299854] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:20:03.898 [2024-07-24 20:47:59.299935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.898 [2024-07-24 20:47:59.369676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.156 [2024-07-24 20:47:59.489708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.156 [2024-07-24 20:47:59.489769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.156 [2024-07-24 20:47:59.489786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.156 [2024-07-24 20:47:59.489799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.156 [2024-07-24 20:47:59.489810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.156 [2024-07-24 20:47:59.489869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.156 [2024-07-24 20:47:59.489924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.156 [2024-07-24 20:47:59.490041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.156 [2024-07-24 20:47:59.490044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 [2024-07-24 20:47:59.624708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 Malloc0 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 [2024-07-24 20:47:59.700179] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 20:47:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.156 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 [ 00:20:04.156 { 00:20:04.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:04.156 "subtype": "Discovery", 00:20:04.156 "listen_addresses": [ 00:20:04.156 { 00:20:04.156 "trtype": "TCP", 00:20:04.156 "adrfam": "IPv4", 00:20:04.156 "traddr": "10.0.0.2", 00:20:04.156 "trsvcid": "4420" 00:20:04.156 } 00:20:04.156 ], 00:20:04.156 "allow_any_host": true, 00:20:04.156 "hosts": [] 00:20:04.156 }, 00:20:04.156 { 00:20:04.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.156 "subtype": "NVMe", 00:20:04.156 "listen_addresses": [ 00:20:04.156 { 00:20:04.156 "trtype": "TCP", 00:20:04.156 "adrfam": "IPv4", 00:20:04.156 "traddr": "10.0.0.2", 00:20:04.156 "trsvcid": "4420" 00:20:04.156 } 00:20:04.156 ], 00:20:04.156 "allow_any_host": true, 00:20:04.156 "hosts": [], 00:20:04.156 "serial_number": "SPDK00000000000001", 00:20:04.156 "model_number": "SPDK bdev Controller", 00:20:04.156 "max_namespaces": 32, 00:20:04.156 "min_cntlid": 1, 00:20:04.156 "max_cntlid": 65519, 00:20:04.156 "namespaces": [ 00:20:04.156 { 00:20:04.156 "nsid": 1, 00:20:04.416 "bdev_name": "Malloc0", 00:20:04.416 "name": "Malloc0", 00:20:04.416 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:04.416 "eui64": "ABCDEF0123456789", 00:20:04.416 "uuid": "b22e3b46-1405-47ae-b7cb-23e25a41ac96" 00:20:04.416 } 00:20:04.416 ] 00:20:04.416 } 00:20:04.416 ] 00:20:04.416 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.416 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:04.416 [2024-07-24 20:47:59.741348] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:20:04.416 [2024-07-24 20:47:59.741395] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642873 ] 00:20:04.416 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.416 [2024-07-24 20:47:59.777470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:04.416 [2024-07-24 20:47:59.777554] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:04.416 [2024-07-24 20:47:59.777565] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:04.416 [2024-07-24 20:47:59.777578] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:04.416 [2024-07-24 20:47:59.777591] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:04.416 [2024-07-24 20:47:59.777925] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:04.416 [2024-07-24 20:47:59.777979] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1243540 0 00:20:04.416 [2024-07-24 20:47:59.784259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:04.416 [2024-07-24 20:47:59.784295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:04.416 [2024-07-24 20:47:59.784305] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:20:04.416 [2024-07-24 20:47:59.784311] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:04.416 [2024-07-24 20:47:59.784406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.416 [2024-07-24 20:47:59.784419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.416 [2024-07-24 20:47:59.784426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.416 [2024-07-24 20:47:59.784442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:04.416 [2024-07-24 20:47:59.784474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.416 [2024-07-24 20:47:59.795256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.416 [2024-07-24 20:47:59.795275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.416 [2024-07-24 20:47:59.795282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.416 [2024-07-24 20:47:59.795299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.416 [2024-07-24 20:47:59.795319] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:04.417 [2024-07-24 20:47:59.795331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:04.417 [2024-07-24 20:47:59.795340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:04.417 [2024-07-24 20:47:59.795361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.795387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.795411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.795579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.795592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.795599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.795619] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:04.417 [2024-07-24 20:47:59.795633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:04.417 [2024-07-24 20:47:59.795645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.795669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.795690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.795793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.795808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.795815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.795830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:04.417 [2024-07-24 20:47:59.795845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:04.417 [2024-07-24 20:47:59.795857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.795871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.795881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.795906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.796007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.796019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.796026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.796041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:04.417 [2024-07-24 20:47:59.796057] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.796083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.796103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.796205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.796217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.796224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.796238] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:04.417 [2024-07-24 20:47:59.796257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:04.417 [2024-07-24 20:47:59.796271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:04.417 [2024-07-24 20:47:59.796387] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:04.417 [2024-07-24 20:47:59.796396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:20:04.417 [2024-07-24 20:47:59.796409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.796449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.796472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.796640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.796656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.796663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.796678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:04.417 [2024-07-24 20:47:59.796694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.796724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.796746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 
20:47:59.796845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.417 [2024-07-24 20:47:59.796860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.417 [2024-07-24 20:47:59.796867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.417 [2024-07-24 20:47:59.796881] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:04.417 [2024-07-24 20:47:59.796890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:04.417 [2024-07-24 20:47:59.796903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:04.417 [2024-07-24 20:47:59.796922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:04.417 [2024-07-24 20:47:59.796938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.796946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.417 [2024-07-24 20:47:59.796956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.417 [2024-07-24 20:47:59.796977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.417 [2024-07-24 20:47:59.797120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.417 [2024-07-24 20:47:59.797135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:20:04.417 [2024-07-24 20:47:59.797142] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.417 [2024-07-24 20:47:59.797149] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1243540): datao=0, datal=4096, cccid=0 00:20:04.417 [2024-07-24 20:47:59.797156] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a33c0) on tqpair(0x1243540): expected_datao=0, payload_size=4096 00:20:04.417 [2024-07-24 20:47:59.797164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797184] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.418 [2024-07-24 20:47:59.797205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.418 [2024-07-24 20:47:59.797212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.418 [2024-07-24 20:47:59.797230] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:04.418 [2024-07-24 20:47:59.797239] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:04.418 [2024-07-24 20:47:59.797255] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:04.418 [2024-07-24 20:47:59.797264] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:04.418 [2024-07-24 20:47:59.797272] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:20:04.418 [2024-07-24 20:47:59.797280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:04.418 [2024-07-24 20:47:59.797303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:04.418 [2024-07-24 20:47:59.797320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.418 [2024-07-24 20:47:59.797367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.418 [2024-07-24 20:47:59.797477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.418 [2024-07-24 20:47:59.797492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.418 [2024-07-24 20:47:59.797499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.418 [2024-07-24 20:47:59.797517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.418 [2024-07-24 20:47:59.797552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.418 [2024-07-24 20:47:59.797583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.418 [2024-07-24 20:47:59.797614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.418 [2024-07-24 20:47:59.797644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:04.418 [2024-07-24 20:47:59.797677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:20:04.418 [2024-07-24 20:47:59.797690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.797697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.797707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.418 [2024-07-24 20:47:59.797729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a33c0, cid 0, qid 0 00:20:04.418 [2024-07-24 20:47:59.797753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3540, cid 1, qid 0 00:20:04.418 [2024-07-24 20:47:59.797764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a36c0, cid 2, qid 0 00:20:04.418 [2024-07-24 20:47:59.797772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.418 [2024-07-24 20:47:59.797779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a39c0, cid 4, qid 0 00:20:04.418 [2024-07-24 20:47:59.798008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.418 [2024-07-24 20:47:59.798020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.418 [2024-07-24 20:47:59.798027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.798034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a39c0) on tqpair=0x1243540 00:20:04.418 [2024-07-24 20:47:59.798042] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:04.418 [2024-07-24 20:47:59.798051] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:04.418 [2024-07-24 20:47:59.798068] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.798078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.798088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.418 [2024-07-24 20:47:59.798124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a39c0, cid 4, qid 0 00:20:04.418 [2024-07-24 20:47:59.798340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.418 [2024-07-24 20:47:59.798354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.418 [2024-07-24 20:47:59.798361] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.798368] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1243540): datao=0, datal=4096, cccid=4 00:20:04.418 [2024-07-24 20:47:59.798375] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a39c0) on tqpair(0x1243540): expected_datao=0, payload_size=4096 00:20:04.418 [2024-07-24 20:47:59.798383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.798399] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.798408] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.843271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.418 [2024-07-24 20:47:59.843289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.418 [2024-07-24 20:47:59.843296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.843303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a39c0) on tqpair=0x1243540 00:20:04.418 [2024-07-24 20:47:59.843322] 
nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:04.418 [2024-07-24 20:47:59.843361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.843372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.843383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.418 [2024-07-24 20:47:59.843394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.843401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.418 [2024-07-24 20:47:59.843407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1243540) 00:20:04.418 [2024-07-24 20:47:59.843416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.418 [2024-07-24 20:47:59.843443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a39c0, cid 4, qid 0 00:20:04.418 [2024-07-24 20:47:59.843473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3b40, cid 5, qid 0 00:20:04.418 [2024-07-24 20:47:59.843640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.419 [2024-07-24 20:47:59.843652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.419 [2024-07-24 20:47:59.843659] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.843665] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1243540): datao=0, datal=1024, cccid=4 00:20:04.419 [2024-07-24 20:47:59.843672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a39c0) on tqpair(0x1243540): expected_datao=0, 
payload_size=1024 00:20:04.419 [2024-07-24 20:47:59.843680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.843690] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.843697] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.843706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.419 [2024-07-24 20:47:59.843715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.419 [2024-07-24 20:47:59.843721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.843728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3b40) on tqpair=0x1243540 00:20:04.419 [2024-07-24 20:47:59.884402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.419 [2024-07-24 20:47:59.884421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.419 [2024-07-24 20:47:59.884429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a39c0) on tqpair=0x1243540 00:20:04.419 [2024-07-24 20:47:59.884454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1243540) 00:20:04.419 [2024-07-24 20:47:59.884475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.419 [2024-07-24 20:47:59.884505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a39c0, cid 4, qid 0 00:20:04.419 [2024-07-24 20:47:59.884632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.419 [2024-07-24 20:47:59.884644] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.419 [2024-07-24 20:47:59.884651] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884658] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1243540): datao=0, datal=3072, cccid=4 00:20:04.419 [2024-07-24 20:47:59.884665] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a39c0) on tqpair(0x1243540): expected_datao=0, payload_size=3072 00:20:04.419 [2024-07-24 20:47:59.884673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884683] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884691] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.419 [2024-07-24 20:47:59.884712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.419 [2024-07-24 20:47:59.884719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a39c0) on tqpair=0x1243540 00:20:04.419 [2024-07-24 20:47:59.884740] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1243540) 00:20:04.419 [2024-07-24 20:47:59.884759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.419 [2024-07-24 20:47:59.884787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a39c0, cid 4, qid 0 00:20:04.419 [2024-07-24 20:47:59.884903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.419 [2024-07-24 
20:47:59.884916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.419 [2024-07-24 20:47:59.884923] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1243540): datao=0, datal=8, cccid=4 00:20:04.419 [2024-07-24 20:47:59.884937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a39c0) on tqpair(0x1243540): expected_datao=0, payload_size=8 00:20:04.419 [2024-07-24 20:47:59.884944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884954] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.884961] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.925401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.419 [2024-07-24 20:47:59.925419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.419 [2024-07-24 20:47:59.925427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.419 [2024-07-24 20:47:59.925434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a39c0) on tqpair=0x1243540 00:20:04.419 ===================================================== 00:20:04.419 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:04.419 ===================================================== 00:20:04.419 Controller Capabilities/Features 00:20:04.419 ================================ 00:20:04.419 Vendor ID: 0000 00:20:04.419 Subsystem Vendor ID: 0000 00:20:04.419 Serial Number: .................... 00:20:04.419 Model Number: ........................................ 
00:20:04.419 Firmware Version: 24.09 00:20:04.419 Recommended Arb Burst: 0 00:20:04.419 IEEE OUI Identifier: 00 00 00 00:20:04.419 Multi-path I/O 00:20:04.419 May have multiple subsystem ports: No 00:20:04.419 May have multiple controllers: No 00:20:04.419 Associated with SR-IOV VF: No 00:20:04.419 Max Data Transfer Size: 131072 00:20:04.419 Max Number of Namespaces: 0 00:20:04.419 Max Number of I/O Queues: 1024 00:20:04.419 NVMe Specification Version (VS): 1.3 00:20:04.419 NVMe Specification Version (Identify): 1.3 00:20:04.419 Maximum Queue Entries: 128 00:20:04.419 Contiguous Queues Required: Yes 00:20:04.419 Arbitration Mechanisms Supported 00:20:04.419 Weighted Round Robin: Not Supported 00:20:04.419 Vendor Specific: Not Supported 00:20:04.419 Reset Timeout: 15000 ms 00:20:04.419 Doorbell Stride: 4 bytes 00:20:04.419 NVM Subsystem Reset: Not Supported 00:20:04.419 Command Sets Supported 00:20:04.419 NVM Command Set: Supported 00:20:04.419 Boot Partition: Not Supported 00:20:04.419 Memory Page Size Minimum: 4096 bytes 00:20:04.419 Memory Page Size Maximum: 4096 bytes 00:20:04.419 Persistent Memory Region: Not Supported 00:20:04.419 Optional Asynchronous Events Supported 00:20:04.419 Namespace Attribute Notices: Not Supported 00:20:04.419 Firmware Activation Notices: Not Supported 00:20:04.419 ANA Change Notices: Not Supported 00:20:04.419 PLE Aggregate Log Change Notices: Not Supported 00:20:04.419 LBA Status Info Alert Notices: Not Supported 00:20:04.419 EGE Aggregate Log Change Notices: Not Supported 00:20:04.419 Normal NVM Subsystem Shutdown event: Not Supported 00:20:04.419 Zone Descriptor Change Notices: Not Supported 00:20:04.419 Discovery Log Change Notices: Supported 00:20:04.419 Controller Attributes 00:20:04.419 128-bit Host Identifier: Not Supported 00:20:04.419 Non-Operational Permissive Mode: Not Supported 00:20:04.419 NVM Sets: Not Supported 00:20:04.419 Read Recovery Levels: Not Supported 00:20:04.419 Endurance Groups: Not Supported 00:20:04.420 
Predictable Latency Mode: Not Supported 00:20:04.420 Traffic Based Keep ALive: Not Supported 00:20:04.420 Namespace Granularity: Not Supported 00:20:04.420 SQ Associations: Not Supported 00:20:04.420 UUID List: Not Supported 00:20:04.420 Multi-Domain Subsystem: Not Supported 00:20:04.420 Fixed Capacity Management: Not Supported 00:20:04.420 Variable Capacity Management: Not Supported 00:20:04.420 Delete Endurance Group: Not Supported 00:20:04.420 Delete NVM Set: Not Supported 00:20:04.420 Extended LBA Formats Supported: Not Supported 00:20:04.420 Flexible Data Placement Supported: Not Supported 00:20:04.420 00:20:04.420 Controller Memory Buffer Support 00:20:04.420 ================================ 00:20:04.420 Supported: No 00:20:04.420 00:20:04.420 Persistent Memory Region Support 00:20:04.420 ================================ 00:20:04.420 Supported: No 00:20:04.420 00:20:04.420 Admin Command Set Attributes 00:20:04.420 ============================ 00:20:04.420 Security Send/Receive: Not Supported 00:20:04.420 Format NVM: Not Supported 00:20:04.420 Firmware Activate/Download: Not Supported 00:20:04.420 Namespace Management: Not Supported 00:20:04.420 Device Self-Test: Not Supported 00:20:04.420 Directives: Not Supported 00:20:04.420 NVMe-MI: Not Supported 00:20:04.420 Virtualization Management: Not Supported 00:20:04.420 Doorbell Buffer Config: Not Supported 00:20:04.420 Get LBA Status Capability: Not Supported 00:20:04.420 Command & Feature Lockdown Capability: Not Supported 00:20:04.420 Abort Command Limit: 1 00:20:04.420 Async Event Request Limit: 4 00:20:04.420 Number of Firmware Slots: N/A 00:20:04.420 Firmware Slot 1 Read-Only: N/A 00:20:04.420 Firmware Activation Without Reset: N/A 00:20:04.420 Multiple Update Detection Support: N/A 00:20:04.420 Firmware Update Granularity: No Information Provided 00:20:04.420 Per-Namespace SMART Log: No 00:20:04.420 Asymmetric Namespace Access Log Page: Not Supported 00:20:04.420 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:20:04.420 Command Effects Log Page: Not Supported 00:20:04.420 Get Log Page Extended Data: Supported 00:20:04.420 Telemetry Log Pages: Not Supported 00:20:04.420 Persistent Event Log Pages: Not Supported 00:20:04.420 Supported Log Pages Log Page: May Support 00:20:04.420 Commands Supported & Effects Log Page: Not Supported 00:20:04.420 Feature Identifiers & Effects Log Page:May Support 00:20:04.420 NVMe-MI Commands & Effects Log Page: May Support 00:20:04.420 Data Area 4 for Telemetry Log: Not Supported 00:20:04.420 Error Log Page Entries Supported: 128 00:20:04.420 Keep Alive: Not Supported 00:20:04.420 00:20:04.420 NVM Command Set Attributes 00:20:04.420 ========================== 00:20:04.420 Submission Queue Entry Size 00:20:04.420 Max: 1 00:20:04.420 Min: 1 00:20:04.420 Completion Queue Entry Size 00:20:04.420 Max: 1 00:20:04.420 Min: 1 00:20:04.420 Number of Namespaces: 0 00:20:04.420 Compare Command: Not Supported 00:20:04.420 Write Uncorrectable Command: Not Supported 00:20:04.420 Dataset Management Command: Not Supported 00:20:04.420 Write Zeroes Command: Not Supported 00:20:04.420 Set Features Save Field: Not Supported 00:20:04.420 Reservations: Not Supported 00:20:04.420 Timestamp: Not Supported 00:20:04.420 Copy: Not Supported 00:20:04.420 Volatile Write Cache: Not Present 00:20:04.420 Atomic Write Unit (Normal): 1 00:20:04.420 Atomic Write Unit (PFail): 1 00:20:04.420 Atomic Compare & Write Unit: 1 00:20:04.420 Fused Compare & Write: Supported 00:20:04.420 Scatter-Gather List 00:20:04.420 SGL Command Set: Supported 00:20:04.420 SGL Keyed: Supported 00:20:04.420 SGL Bit Bucket Descriptor: Not Supported 00:20:04.420 SGL Metadata Pointer: Not Supported 00:20:04.420 Oversized SGL: Not Supported 00:20:04.420 SGL Metadata Address: Not Supported 00:20:04.420 SGL Offset: Supported 00:20:04.420 Transport SGL Data Block: Not Supported 00:20:04.420 Replay Protected Memory Block: Not Supported 00:20:04.420 00:20:04.420 
Firmware Slot Information 00:20:04.420 ========================= 00:20:04.420 Active slot: 0 00:20:04.420 00:20:04.420 00:20:04.420 Error Log 00:20:04.420 ========= 00:20:04.420 00:20:04.420 Active Namespaces 00:20:04.420 ================= 00:20:04.420 Discovery Log Page 00:20:04.420 ================== 00:20:04.420 Generation Counter: 2 00:20:04.420 Number of Records: 2 00:20:04.420 Record Format: 0 00:20:04.420 00:20:04.420 Discovery Log Entry 0 00:20:04.420 ---------------------- 00:20:04.420 Transport Type: 3 (TCP) 00:20:04.420 Address Family: 1 (IPv4) 00:20:04.420 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:04.420 Entry Flags: 00:20:04.420 Duplicate Returned Information: 1 00:20:04.420 Explicit Persistent Connection Support for Discovery: 1 00:20:04.420 Transport Requirements: 00:20:04.420 Secure Channel: Not Required 00:20:04.420 Port ID: 0 (0x0000) 00:20:04.420 Controller ID: 65535 (0xffff) 00:20:04.420 Admin Max SQ Size: 128 00:20:04.420 Transport Service Identifier: 4420 00:20:04.420 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:04.420 Transport Address: 10.0.0.2 00:20:04.420 Discovery Log Entry 1 00:20:04.420 ---------------------- 00:20:04.420 Transport Type: 3 (TCP) 00:20:04.420 Address Family: 1 (IPv4) 00:20:04.420 Subsystem Type: 2 (NVM Subsystem) 00:20:04.420 Entry Flags: 00:20:04.420 Duplicate Returned Information: 0 00:20:04.420 Explicit Persistent Connection Support for Discovery: 0 00:20:04.420 Transport Requirements: 00:20:04.420 Secure Channel: Not Required 00:20:04.420 Port ID: 0 (0x0000) 00:20:04.420 Controller ID: 65535 (0xffff) 00:20:04.420 Admin Max SQ Size: 128 00:20:04.420 Transport Service Identifier: 4420 00:20:04.420 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:04.420 Transport Address: 10.0.0.2 [2024-07-24 20:47:59.925555] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:04.420 [2024-07-24 20:47:59.925578] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a33c0) on tqpair=0x1243540 00:20:04.420 [2024-07-24 20:47:59.925589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.420 [2024-07-24 20:47:59.925598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3540) on tqpair=0x1243540 00:20:04.420 [2024-07-24 20:47:59.925606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.420 [2024-07-24 20:47:59.925614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a36c0) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.925622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.421 [2024-07-24 20:47:59.925630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.925637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.421 [2024-07-24 20:47:59.925655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.925665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.925671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.925700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.925725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.925877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.925890] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.925897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.925904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.925915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.925924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.925930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.925940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.925971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.926089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.926104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.926111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.926126] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:04.421 [2024-07-24 20:47:59.926134] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:04.421 [2024-07-24 20:47:59.926149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 
20:47:59.926165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.926175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.926196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.926319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.926333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.926340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.926364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.926390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.926411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.926514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.926526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.926533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 
00:20:04.421 [2024-07-24 20:47:59.926555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.926581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.926602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.926700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.926715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.926722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.926745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.926775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.926796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.926894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.926909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 
[2024-07-24 20:47:59.926915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.926938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.926954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.926965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.926986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.927086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.927098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.927105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.927111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.927127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.927136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.927143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.927153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.927173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 
0 00:20:04.421 [2024-07-24 20:47:59.931266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.931283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.931291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.931297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.931315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.931324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.931331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1243540) 00:20:04.421 [2024-07-24 20:47:59.931341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.421 [2024-07-24 20:47:59.931363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a3840, cid 3, qid 0 00:20:04.421 [2024-07-24 20:47:59.931501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.421 [2024-07-24 20:47:59.931517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.421 [2024-07-24 20:47:59.931524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.421 [2024-07-24 20:47:59.931530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a3840) on tqpair=0x1243540 00:20:04.421 [2024-07-24 20:47:59.931544] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:20:04.421 00:20:04.421 20:47:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:04.421 [2024-07-24 20:47:59.963562] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:20:04.422 [2024-07-24 20:47:59.963610] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642878 ] 00:20:04.422 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.683 [2024-07-24 20:47:59.998447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:04.683 [2024-07-24 20:47:59.998500] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:04.683 [2024-07-24 20:47:59.998511] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:04.683 [2024-07-24 20:47:59.998539] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:04.683 [2024-07-24 20:47:59.998551] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:04.683 [2024-07-24 20:48:00.002302] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:04.683 [2024-07-24 20:48:00.002348] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2462540 0 00:20:04.683 [2024-07-24 20:48:00.009253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:04.683 [2024-07-24 20:48:00.009278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:04.683 [2024-07-24 20:48:00.009297] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:04.683 [2024-07-24 20:48:00.009303] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:04.683 [2024-07-24 20:48:00.009344] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.009356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.009363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.009378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:04.683 [2024-07-24 20:48:00.009404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.017263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.017297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.017304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.683 [2024-07-24 20:48:00.017329] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:04.683 [2024-07-24 20:48:00.017340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:04.683 [2024-07-24 20:48:00.017351] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:04.683 [2024-07-24 20:48:00.017375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.017404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:04.683 [2024-07-24 20:48:00.017431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.017602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.017618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.017630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.683 [2024-07-24 20:48:00.017651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:04.683 [2024-07-24 20:48:00.017666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:04.683 [2024-07-24 20:48:00.017679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.017706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.683 [2024-07-24 20:48:00.017727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.017880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.017896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.017903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017909] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.683 [2024-07-24 20:48:00.017918] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:04.683 [2024-07-24 20:48:00.017932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:04.683 [2024-07-24 20:48:00.017944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.017958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.017969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.683 [2024-07-24 20:48:00.017990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.018139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.018154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.018161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.683 [2024-07-24 20:48:00.018176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:04.683 [2024-07-24 20:48:00.018193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018208] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.018219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.683 [2024-07-24 20:48:00.018239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.018404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.018417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.018424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.683 [2024-07-24 20:48:00.018442] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:04.683 [2024-07-24 20:48:00.018452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:04.683 [2024-07-24 20:48:00.018465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:04.683 [2024-07-24 20:48:00.018575] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:04.683 [2024-07-24 20:48:00.018582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:04.683 [2024-07-24 20:48:00.018595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.683 [2024-07-24 20:48:00.018609] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.683 [2024-07-24 20:48:00.018619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.683 [2024-07-24 20:48:00.018641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.683 [2024-07-24 20:48:00.018770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.683 [2024-07-24 20:48:00.018785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.683 [2024-07-24 20:48:00.018792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.018798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.018807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:04.684 [2024-07-24 20:48:00.018823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.018832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.018838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.018849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.684 [2024-07-24 20:48:00.018869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.684 [2024-07-24 20:48:00.018994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.684 [2024-07-24 20:48:00.019009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.684 [2024-07-24 20:48:00.019016] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.019030] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:04.684 [2024-07-24 20:48:00.019039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019052] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:04.684 [2024-07-24 20:48:00.019067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.684 [2024-07-24 20:48:00.019122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.684 [2024-07-24 20:48:00.019290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.684 [2024-07-24 20:48:00.019307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.684 [2024-07-24 20:48:00.019314] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019321] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=4096, cccid=0 
00:20:04.684 [2024-07-24 20:48:00.019329] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c23c0) on tqpair(0x2462540): expected_datao=0, payload_size=4096 00:20:04.684 [2024-07-24 20:48:00.019337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019348] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019356] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.684 [2024-07-24 20:48:00.019410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.684 [2024-07-24 20:48:00.019417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.019435] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:04.684 [2024-07-24 20:48:00.019444] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:04.684 [2024-07-24 20:48:00.019452] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:04.684 [2024-07-24 20:48:00.019459] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:04.684 [2024-07-24 20:48:00.019467] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:04.684 [2024-07-24 20:48:00.019475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
wait for configure aer (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.684 [2024-07-24 20:48:00.019560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.684 [2024-07-24 20:48:00.019708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.684 [2024-07-24 20:48:00.019720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.684 [2024-07-24 20:48:00.019727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.019744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.684 [2024-07-24 20:48:00.019778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019791] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.684 [2024-07-24 20:48:00.019817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.684 [2024-07-24 20:48:00.019849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.684 [2024-07-24 20:48:00.019880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.019913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.019920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.019930] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.684 [2024-07-24 20:48:00.019952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c23c0, cid 0, qid 0 00:20:04.684 [2024-07-24 20:48:00.019963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2540, cid 1, qid 0 00:20:04.684 [2024-07-24 20:48:00.019972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c26c0, cid 2, qid 0 00:20:04.684 [2024-07-24 20:48:00.019980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.684 [2024-07-24 20:48:00.019987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.684 [2024-07-24 20:48:00.020182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.684 [2024-07-24 20:48:00.020194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.684 [2024-07-24 20:48:00.020200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.020207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.020215] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:04.684 [2024-07-24 20:48:00.020223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.020250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.020264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
00:20:04.684 [2024-07-24 20:48:00.020275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.020282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.020289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.020299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.684 [2024-07-24 20:48:00.020325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.684 [2024-07-24 20:48:00.020457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.684 [2024-07-24 20:48:00.020472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.684 [2024-07-24 20:48:00.020478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.020485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.684 [2024-07-24 20:48:00.020563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.020583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:04.684 [2024-07-24 20:48:00.020597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.684 [2024-07-24 20:48:00.020605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.684 [2024-07-24 20:48:00.020616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.020637] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.685 [2024-07-24 20:48:00.020798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.685 [2024-07-24 20:48:00.020810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.685 [2024-07-24 20:48:00.020817] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.020823] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=4096, cccid=4 00:20:04.685 [2024-07-24 20:48:00.020831] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c29c0) on tqpair(0x2462540): expected_datao=0, payload_size=4096 00:20:04.685 [2024-07-24 20:48:00.020838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.020848] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.020856] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.020898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.020913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.020920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.020926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.020943] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:04.685 [2024-07-24 20:48:00.020964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.020982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 
00:20:04.685 [2024-07-24 20:48:00.020996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.021004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.021014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.021035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.685 [2024-07-24 20:48:00.021191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.685 [2024-07-24 20:48:00.021206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.685 [2024-07-24 20:48:00.021213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.021220] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=4096, cccid=4 00:20:04.685 [2024-07-24 20:48:00.021227] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c29c0) on tqpair(0x2462540): expected_datao=0, payload_size=4096 00:20:04.685 [2024-07-24 20:48:00.021239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025264] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025273] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.025307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.025313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on 
tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.025344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.025397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.025419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.685 [2024-07-24 20:48:00.025581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.685 [2024-07-24 20:48:00.025593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.685 [2024-07-24 20:48:00.025600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025606] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=4096, cccid=4 00:20:04.685 [2024-07-24 20:48:00.025614] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c29c0) on tqpair(0x2462540): expected_datao=0, payload_size=4096 00:20:04.685 [2024-07-24 20:48:00.025621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025631] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025638] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025649] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.025659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.025665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.025685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025755] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:04.685 [2024-07-24 20:48:00.025766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:04.685 [2024-07-24 20:48:00.025775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready 
(no timeout) 00:20:04.685 [2024-07-24 20:48:00.025799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.025819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.025830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.025844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.025853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:04.685 [2024-07-24 20:48:00.025879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.685 [2024-07-24 20:48:00.025890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2b40, cid 5, qid 0 00:20:04.685 [2024-07-24 20:48:00.026035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.026047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.026053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.026070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.026079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.026085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2b40) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.026107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.026127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.026147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2b40, cid 5, qid 0 00:20:04.685 [2024-07-24 20:48:00.026296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.026310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.026317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2b40) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.026339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.026359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.026379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2b40, cid 5, qid 0 00:20:04.685 [2024-07-24 20:48:00.026480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.026495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 
20:48:00.026502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2b40) on tqpair=0x2462540 00:20:04.685 [2024-07-24 20:48:00.026528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.685 [2024-07-24 20:48:00.026538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462540) 00:20:04.685 [2024-07-24 20:48:00.026549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.685 [2024-07-24 20:48:00.026569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2b40, cid 5, qid 0 00:20:04.685 [2024-07-24 20:48:00.026664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.685 [2024-07-24 20:48:00.026679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.685 [2024-07-24 20:48:00.026685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.026692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2b40) on tqpair=0x2462540 00:20:04.686 [2024-07-24 20:48:00.026717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.026728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2462540) 00:20:04.686 [2024-07-24 20:48:00.026739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.686 [2024-07-24 20:48:00.026751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.026759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2462540) 00:20:04.686 [2024-07-24 20:48:00.026768] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.686 [2024-07-24 20:48:00.026780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.026788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2462540) 00:20:04.686 [2024-07-24 20:48:00.026797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.686 [2024-07-24 20:48:00.026810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.026818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2462540) 00:20:04.686 [2024-07-24 20:48:00.026827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.686 [2024-07-24 20:48:00.026849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2b40, cid 5, qid 0 00:20:04.686 [2024-07-24 20:48:00.026860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c29c0, cid 4, qid 0 00:20:04.686 [2024-07-24 20:48:00.026867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2cc0, cid 6, qid 0 00:20:04.686 [2024-07-24 20:48:00.026875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2e40, cid 7, qid 0 00:20:04.686 [2024-07-24 20:48:00.027103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.686 [2024-07-24 20:48:00.027119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.686 [2024-07-24 20:48:00.027126] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027132] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=8192, cccid=5 00:20:04.686 [2024-07-24 20:48:00.027139] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c2b40) on tqpair(0x2462540): expected_datao=0, payload_size=8192 00:20:04.686 [2024-07-24 20:48:00.027147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027200] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027210] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.686 [2024-07-24 20:48:00.027228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.686 [2024-07-24 20:48:00.027239] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027255] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=512, cccid=4 00:20:04.686 [2024-07-24 20:48:00.027263] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c29c0) on tqpair(0x2462540): expected_datao=0, payload_size=512 00:20:04.686 [2024-07-24 20:48:00.027270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027280] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.686 [2024-07-24 20:48:00.027304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.686 [2024-07-24 20:48:00.027310] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027316] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x2462540): datao=0, datal=512, cccid=6 00:20:04.686 [2024-07-24 20:48:00.027324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c2cc0) on tqpair(0x2462540): expected_datao=0, payload_size=512 00:20:04.686 [2024-07-24 20:48:00.027331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027340] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027346] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:04.686 [2024-07-24 20:48:00.027363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:04.686 [2024-07-24 20:48:00.027369] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027375] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2462540): datao=0, datal=4096, cccid=7 00:20:04.686 [2024-07-24 20:48:00.027383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c2e40) on tqpair(0x2462540): expected_datao=0, payload_size=4096 00:20:04.686 [2024-07-24 20:48:00.027390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027400] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027407] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.686 [2024-07-24 20:48:00.027428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.686 [2024-07-24 20:48:00.027435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2b40) on tqpair=0x2462540 00:20:04.686 [2024-07-24 
20:48:00.027461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.686 [2024-07-24 20:48:00.027472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.686 [2024-07-24 20:48:00.027479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c29c0) on tqpair=0x2462540 00:20:04.686 [2024-07-24 20:48:00.027509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.686 [2024-07-24 20:48:00.027520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.686 [2024-07-24 20:48:00.027526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2cc0) on tqpair=0x2462540 00:20:04.686 [2024-07-24 20:48:00.027543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.686 [2024-07-24 20:48:00.027552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.686 [2024-07-24 20:48:00.027559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.686 [2024-07-24 20:48:00.027565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2e40) on tqpair=0x2462540
00:20:04.686 =====================================================
00:20:04.686 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:04.686 =====================================================
00:20:04.686 Controller Capabilities/Features
00:20:04.686 ================================
00:20:04.686 Vendor ID: 8086
00:20:04.686 Subsystem Vendor ID: 8086
00:20:04.686 Serial Number: SPDK00000000000001
00:20:04.686 Model Number: SPDK bdev Controller
00:20:04.686 Firmware Version: 24.09
00:20:04.686 Recommended Arb Burst: 6
00:20:04.686 IEEE OUI Identifier: e4 d2 5c
00:20:04.686 Multi-path I/O
00:20:04.686 May have multiple subsystem ports: Yes
00:20:04.686 May have multiple controllers: Yes
00:20:04.686 Associated with SR-IOV VF: No
00:20:04.686 Max Data Transfer Size: 131072
00:20:04.686 Max Number of Namespaces: 32
00:20:04.686 Max Number of I/O Queues: 127
00:20:04.686 NVMe Specification Version (VS): 1.3
00:20:04.686 NVMe Specification Version (Identify): 1.3
00:20:04.686 Maximum Queue Entries: 128
00:20:04.686 Contiguous Queues Required: Yes
00:20:04.686 Arbitration Mechanisms Supported
00:20:04.686 Weighted Round Robin: Not Supported
00:20:04.686 Vendor Specific: Not Supported
00:20:04.686 Reset Timeout: 15000 ms
00:20:04.686 Doorbell Stride: 4 bytes
00:20:04.686 NVM Subsystem Reset: Not Supported
00:20:04.686 Command Sets Supported
00:20:04.686 NVM Command Set: Supported
00:20:04.686 Boot Partition: Not Supported
00:20:04.686 Memory Page Size Minimum: 4096 bytes
00:20:04.686 Memory Page Size Maximum: 4096 bytes
00:20:04.686 Persistent Memory Region: Not Supported
00:20:04.686 Optional Asynchronous Events Supported
00:20:04.686 Namespace Attribute Notices: Supported
00:20:04.686 Firmware Activation Notices: Not Supported
00:20:04.686 ANA Change Notices: Not Supported
00:20:04.686 PLE Aggregate Log Change Notices: Not Supported
00:20:04.686 LBA Status Info Alert Notices: Not Supported
00:20:04.686 EGE Aggregate Log Change Notices: Not Supported
00:20:04.686 Normal NVM Subsystem Shutdown event: Not Supported
00:20:04.686 Zone Descriptor Change Notices: Not Supported
00:20:04.686 Discovery Log Change Notices: Not Supported
00:20:04.686 Controller Attributes
00:20:04.686 128-bit Host Identifier: Supported
00:20:04.686 Non-Operational Permissive Mode: Not Supported
00:20:04.686 NVM Sets: Not Supported
00:20:04.686 Read Recovery Levels: Not Supported
00:20:04.686 Endurance Groups: Not Supported
00:20:04.686 Predictable Latency Mode: Not Supported
00:20:04.686 Traffic Based Keep Alive: Not Supported
00:20:04.686 Namespace Granularity: Not Supported
00:20:04.686 SQ Associations: Not Supported
00:20:04.686 UUID List: Not Supported
00:20:04.686 Multi-Domain Subsystem: Not Supported
00:20:04.686 Fixed Capacity Management: Not Supported
00:20:04.686 Variable Capacity Management: Not Supported
00:20:04.686 Delete Endurance Group: Not Supported
00:20:04.686 Delete NVM Set: Not Supported
00:20:04.686 Extended LBA Formats Supported: Not Supported
00:20:04.686 Flexible Data Placement Supported: Not Supported
00:20:04.686
00:20:04.686 Controller Memory Buffer Support
00:20:04.686 ================================
00:20:04.687 Supported: No
00:20:04.687
00:20:04.687 Persistent Memory Region Support
00:20:04.687 ================================
00:20:04.687 Supported: No
00:20:04.687
00:20:04.687 Admin Command Set Attributes
00:20:04.687 ============================
00:20:04.687 Security Send/Receive: Not Supported
00:20:04.687 Format NVM: Not Supported
00:20:04.687 Firmware Activate/Download: Not Supported
00:20:04.687 Namespace Management: Not Supported
00:20:04.687 Device Self-Test: Not Supported
00:20:04.687 Directives: Not Supported
00:20:04.687 NVMe-MI: Not Supported
00:20:04.687 Virtualization Management: Not Supported
00:20:04.687 Doorbell Buffer Config: Not Supported
00:20:04.687 Get LBA Status Capability: Not Supported
00:20:04.687 Command & Feature Lockdown Capability: Not Supported
00:20:04.687 Abort Command Limit: 4
00:20:04.687 Async Event Request Limit: 4
00:20:04.687 Number of Firmware Slots: N/A
00:20:04.687 Firmware Slot 1 Read-Only: N/A
00:20:04.687 Firmware Activation Without Reset: N/A
00:20:04.687 Multiple Update Detection Support: N/A
00:20:04.687 Firmware Update Granularity: No Information Provided
00:20:04.687 Per-Namespace SMART Log: No
00:20:04.687 Asymmetric Namespace Access Log Page: Not Supported
00:20:04.687 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:04.687 Command Effects Log Page: Supported
00:20:04.687 Get Log Page Extended Data: Supported
00:20:04.687 Telemetry Log Pages: Not Supported
00:20:04.687 Persistent Event Log Pages: Not Supported
00:20:04.687 Supported Log Pages Log Page: May Support
00:20:04.687 Commands Supported & Effects Log Page: Not Supported
00:20:04.687 Feature Identifiers & Effects Log Page: May Support
00:20:04.687 NVMe-MI Commands & Effects Log Page: May Support
00:20:04.687 Data Area 4 for Telemetry Log: Not Supported
00:20:04.687 Error Log Page Entries Supported: 128
00:20:04.687 Keep Alive: Supported
00:20:04.687 Keep Alive Granularity: 10000 ms
00:20:04.687
00:20:04.687 NVM Command Set Attributes
00:20:04.687 ==========================
00:20:04.687 Submission Queue Entry Size
00:20:04.687 Max: 64
00:20:04.687 Min: 64
00:20:04.687 Completion Queue Entry Size
00:20:04.687 Max: 16
00:20:04.687 Min: 16
00:20:04.687 Number of Namespaces: 32
00:20:04.687 Compare Command: Supported
00:20:04.687 Write Uncorrectable Command: Not Supported
00:20:04.687 Dataset Management Command: Supported
00:20:04.687 Write Zeroes Command: Supported
00:20:04.687 Set Features Save Field: Not Supported
00:20:04.687 Reservations: Supported
00:20:04.687 Timestamp: Not Supported
00:20:04.687 Copy: Supported
00:20:04.687 Volatile Write Cache: Present
00:20:04.687 Atomic Write Unit (Normal): 1
00:20:04.687 Atomic Write Unit (PFail): 1
00:20:04.687 Atomic Compare & Write Unit: 1
00:20:04.687 Fused Compare & Write: Supported
00:20:04.687 Scatter-Gather List
00:20:04.687 SGL Command Set: Supported
00:20:04.687 SGL Keyed: Supported
00:20:04.687 SGL Bit Bucket Descriptor: Not Supported
00:20:04.687 SGL Metadata Pointer: Not Supported
00:20:04.687 Oversized SGL: Not Supported
00:20:04.687 SGL Metadata Address: Not Supported
00:20:04.687 SGL Offset: Supported
00:20:04.687 Transport SGL Data Block: Not Supported
00:20:04.687 Replay Protected Memory Block: Not Supported
00:20:04.687
00:20:04.687 Firmware Slot Information
00:20:04.687 =========================
00:20:04.687 Active slot: 1
00:20:04.687 Slot 1 Firmware Revision: 24.09
00:20:04.687
00:20:04.687
00:20:04.687 Commands Supported and Effects
00:20:04.687 ==============================
00:20:04.687 Admin Commands
00:20:04.687 --------------
00:20:04.687 Get Log Page (02h): Supported
00:20:04.687 Identify (06h): Supported
00:20:04.687 Abort (08h): Supported
00:20:04.687 Set Features (09h): Supported
00:20:04.687 Get Features (0Ah): Supported
00:20:04.687 Asynchronous Event Request (0Ch): Supported
00:20:04.687 Keep Alive (18h): Supported
00:20:04.687 I/O Commands
00:20:04.687 ------------
00:20:04.687 Flush (00h): Supported LBA-Change
00:20:04.687 Write (01h): Supported LBA-Change
00:20:04.687 Read (02h): Supported
00:20:04.687 Compare (05h): Supported
00:20:04.687 Write Zeroes (08h): Supported LBA-Change
00:20:04.687 Dataset Management (09h): Supported LBA-Change
00:20:04.687 Copy (19h): Supported LBA-Change
00:20:04.687
00:20:04.687 Error Log
00:20:04.687 =========
00:20:04.687
00:20:04.687 Arbitration
00:20:04.687 ===========
00:20:04.687 Arbitration Burst: 1
00:20:04.687
00:20:04.687 Power Management
00:20:04.687 ================
00:20:04.687 Number of Power States: 1
00:20:04.687 Current Power State: Power State #0
00:20:04.687 Power State #0:
00:20:04.687 Max Power: 0.00 W
00:20:04.687 Non-Operational State: Operational
00:20:04.687 Entry Latency: Not Reported
00:20:04.687 Exit Latency: Not Reported
00:20:04.687 Relative Read Throughput: 0
00:20:04.687 Relative Read Latency: 0
00:20:04.687 Relative Write Throughput: 0
00:20:04.687 Relative Write Latency: 0
00:20:04.687 Idle Power: Not Reported
00:20:04.687 Active Power: Not Reported
00:20:04.687 Non-Operational Permissive Mode: Not Supported
00:20:04.687
00:20:04.687 Health Information
00:20:04.687 ==================
00:20:04.687 Critical Warnings:
00:20:04.687 Available Spare Space: OK
00:20:04.687 Temperature: OK
00:20:04.687 Device Reliability: OK
00:20:04.687 Read Only: No
00:20:04.687 Volatile Memory Backup: OK
00:20:04.687 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:04.687 Temperature Threshold: 0 Kelvin
(-273 Celsius) 00:20:04.687 Available Spare: 0% 00:20:04.687 Available Spare Threshold: 0% 00:20:04.687 Life Percentage Used:[2024-07-24 20:48:00.027696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.687 [2024-07-24 20:48:00.027708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2462540) 00:20:04.687 [2024-07-24 20:48:00.027719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.687 [2024-07-24 20:48:00.027743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2e40, cid 7, qid 0 00:20:04.687 [2024-07-24 20:48:00.027887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.687 [2024-07-24 20:48:00.027902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.687 [2024-07-24 20:48:00.027909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.687 [2024-07-24 20:48:00.027916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2e40) on tqpair=0x2462540 00:20:04.687 [2024-07-24 20:48:00.027964] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:04.687 [2024-07-24 20:48:00.027989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c23c0) on tqpair=0x2462540 00:20:04.687 [2024-07-24 20:48:00.028000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.687 [2024-07-24 20:48:00.028009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2540) on tqpair=0x2462540 00:20:04.687 [2024-07-24 20:48:00.028017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.687 [2024-07-24 20:48:00.028025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x24c26c0) on tqpair=0x2462540 00:20:04.687 [2024-07-24 20:48:00.028032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.687 [2024-07-24 20:48:00.028040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.687 [2024-07-24 20:48:00.028048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:04.688 [2024-07-24 20:48:00.028061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.028086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.028110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.028267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.028283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.028290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.028308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 
00:20:04.688 [2024-07-24 20:48:00.028332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.028359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.028483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.028498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.028505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.028524] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:04.688 [2024-07-24 20:48:00.028533] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:04.688 [2024-07-24 20:48:00.028549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.028574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.028595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.028746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.028761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.028767] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.028790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.028816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.028837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.028942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.028957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.028964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.028987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.028996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.029002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.029013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.029033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 
20:48:00.029147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.029162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.029168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.029175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.029191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.029200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.029207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.029217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.029237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.033271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.033283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.033294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.033301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.033318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.033328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.033335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2462540) 00:20:04.688 [2024-07-24 20:48:00.033345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:04.688 [2024-07-24 20:48:00.033367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c2840, cid 3, qid 0 00:20:04.688 [2024-07-24 20:48:00.033517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:04.688 [2024-07-24 20:48:00.033532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:04.688 [2024-07-24 20:48:00.033539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:04.688 [2024-07-24 20:48:00.033545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c2840) on tqpair=0x2462540 00:20:04.688 [2024-07-24 20:48:00.033558] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:04.688 0% 00:20:04.688 Data Units Read: 0 00:20:04.688 Data Units Written: 0 00:20:04.688 Host Read Commands: 0 00:20:04.688 Host Write Commands: 0 00:20:04.688 Controller Busy Time: 0 minutes 00:20:04.688 Power Cycles: 0 00:20:04.688 Power On Hours: 0 hours 00:20:04.688 Unsafe Shutdowns: 0 00:20:04.688 Unrecoverable Media Errors: 0 00:20:04.688 Lifetime Error Log Entries: 0 00:20:04.688 Warning Temperature Time: 0 minutes 00:20:04.688 Critical Temperature Time: 0 minutes 00:20:04.688 00:20:04.688 Number of Queues 00:20:04.688 ================ 00:20:04.688 Number of I/O Submission Queues: 127 00:20:04.688 Number of I/O Completion Queues: 127 00:20:04.688 00:20:04.688 Active Namespaces 00:20:04.688 ================= 00:20:04.688 Namespace ID:1 00:20:04.688 Error Recovery Timeout: Unlimited 00:20:04.688 Command Set Identifier: NVM (00h) 00:20:04.688 Deallocate: Supported 00:20:04.688 Deallocated/Unwritten Error: Not Supported 00:20:04.688 Deallocated Read Value: Unknown 00:20:04.688 Deallocate in Write Zeroes: Not Supported 00:20:04.688 Deallocated Guard Field: 0xFFFF 00:20:04.688 Flush: Supported 00:20:04.688 Reservation: Supported 
00:20:04.688 Namespace Sharing Capabilities: Multiple Controllers 00:20:04.688 Size (in LBAs): 131072 (0GiB) 00:20:04.688 Capacity (in LBAs): 131072 (0GiB) 00:20:04.688 Utilization (in LBAs): 131072 (0GiB) 00:20:04.688 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:04.688 EUI64: ABCDEF0123456789 00:20:04.688 UUID: b22e3b46-1405-47ae-b7cb-23e25a41ac96 00:20:04.688 Thin Provisioning: Not Supported 00:20:04.688 Per-NS Atomic Units: Yes 00:20:04.688 Atomic Boundary Size (Normal): 0 00:20:04.688 Atomic Boundary Size (PFail): 0 00:20:04.688 Atomic Boundary Offset: 0 00:20:04.688 Maximum Single Source Range Length: 65535 00:20:04.688 Maximum Copy Length: 65535 00:20:04.688 Maximum Source Range Count: 1 00:20:04.688 NGUID/EUI64 Never Reused: No 00:20:04.688 Namespace Write Protected: No 00:20:04.688 Number of LBA Formats: 1 00:20:04.688 Current LBA Format: LBA Format #00 00:20:04.688 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:04.688 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:04.688 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.689 20:48:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.689 rmmod nvme_tcp 00:20:04.689 rmmod nvme_fabrics 00:20:04.689 rmmod nvme_keyring 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1642847 ']' 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1642847 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1642847 ']' 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1642847 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642847 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642847' 00:20:04.689 killing process with pid 1642847 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1642847 00:20:04.689 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@974 -- # wait 1642847 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.947 20:48:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.499 00:20:07.499 real 0m5.371s 00:20:07.499 user 0m4.371s 00:20:07.499 sys 0m1.803s 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:07.499 ************************************ 00:20:07.499 END TEST nvmf_identify 00:20:07.499 ************************************ 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.499 ************************************ 00:20:07.499 START 
TEST nvmf_perf 00:20:07.499 ************************************ 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:07.499 * Looking for test storage... 00:20:07.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.499 20:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 
-- # local -ga net_devs 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:09.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:09.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.399 20:48:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:09.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:09.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:09.399 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr 
flush cvl_0_0 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:20:09.400 00:20:09.400 --- 10.0.0.2 ping statistics --- 00:20:09.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.400 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:20:09.400 00:20:09.400 --- 10.0.0.1 ping statistics --- 00:20:09.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.400 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1645018 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1645018 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:09.400 
20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1645018 ']' 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.400 20:48:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:09.400 [2024-07-24 20:48:04.682982] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:20:09.400 [2024-07-24 20:48:04.683066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.400 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.400 [2024-07-24 20:48:04.755452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.400 [2024-07-24 20:48:04.866845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.400 [2024-07-24 20:48:04.866905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.400 [2024-07-24 20:48:04.866933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.400 [2024-07-24 20:48:04.866944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:09.400 [2024-07-24 20:48:04.866954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.400 [2024-07-24 20:48:04.867040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.400 [2024-07-24 20:48:04.867106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.400 [2024-07-24 20:48:04.867139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.400 [2024-07-24 20:48:04.867141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:10.331 20:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:13.610 20:48:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:13.610 20:48:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:13.610 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:13.610 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:13.868 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:13.868 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:13.868 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:13.868 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:13.868 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.126 [2024-07-24 20:48:09.601343] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.126 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.383 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.383 20:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.641 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:14.641 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:14.899 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.157 [2024-07-24 20:48:10.600942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.157 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.414 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:15.415 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:15.415 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:15.415 20:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:16.788 Initializing NVMe Controllers 00:20:16.788 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:16.788 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:16.788 Initialization complete. Launching workers. 00:20:16.788 ======================================================== 00:20:16.788 Latency(us) 00:20:16.788 Device Information : IOPS MiB/s Average min max 00:20:16.788 PCIE (0000:88:00.0) NSID 1 from core 0: 85873.90 335.44 372.04 34.04 4362.24 00:20:16.788 ======================================================== 00:20:16.788 Total : 85873.90 335.44 372.04 34.04 4362.24 00:20:16.788 00:20:16.788 20:48:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:16.788 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.768 Initializing NVMe Controllers 00:20:17.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.768 
Initialization complete. Launching workers. 00:20:17.768 ======================================================== 00:20:17.768 Latency(us) 00:20:17.768 Device Information : IOPS MiB/s Average min max 00:20:17.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.00 0.39 9976.46 160.17 45454.31 00:20:17.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19044.67 7932.07 47928.05 00:20:17.768 ======================================================== 00:20:17.768 Total : 155.00 0.61 13135.71 160.17 47928.05 00:20:17.768 00:20:18.026 20:48:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:18.026 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.398 Initializing NVMe Controllers 00:20:19.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:19.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:19.398 Initialization complete. Launching workers. 
00:20:19.398 ======================================================== 00:20:19.398 Latency(us) 00:20:19.398 Device Information : IOPS MiB/s Average min max 00:20:19.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8492.18 33.17 3768.83 517.90 9516.28 00:20:19.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3665.13 14.32 8730.70 5705.20 17053.18 00:20:19.398 ======================================================== 00:20:19.398 Total : 12157.30 47.49 5264.71 517.90 17053.18 00:20:19.398 00:20:19.398 20:48:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:19.398 20:48:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:19.398 20:48:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.398 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.924 Initializing NVMe Controllers 00:20:21.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.924 Controller IO queue size 128, less than required. 00:20:21.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.924 Controller IO queue size 128, less than required. 00:20:21.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:21.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:21.924 Initialization complete. Launching workers. 
00:20:21.924 ======================================================== 00:20:21.924 Latency(us) 00:20:21.924 Device Information : IOPS MiB/s Average min max 00:20:21.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1321.48 330.37 99247.85 60364.56 169932.62 00:20:21.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 556.49 139.12 237128.12 94661.80 352930.81 00:20:21.924 ======================================================== 00:20:21.924 Total : 1877.97 469.49 140105.34 60364.56 352930.81 00:20:21.924 00:20:21.924 20:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:21.924 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.182 No valid NVMe controllers or AIO or URING devices found 00:20:22.182 Initializing NVMe Controllers 00:20:22.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.182 Controller IO queue size 128, less than required. 00:20:22.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.182 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:22.182 Controller IO queue size 128, less than required. 00:20:22.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.182 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:22.182 WARNING: Some requested NVMe devices were skipped 00:20:22.182 20:48:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:22.182 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.709 Initializing NVMe Controllers 00:20:24.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.709 Controller IO queue size 128, less than required. 00:20:24.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.709 Controller IO queue size 128, less than required. 00:20:24.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.709 Initialization complete. Launching workers. 
00:20:24.709 00:20:24.709 ==================== 00:20:24.709 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:24.709 TCP transport: 00:20:24.709 polls: 13288 00:20:24.709 idle_polls: 6160 00:20:24.709 sock_completions: 7128 00:20:24.709 nvme_completions: 5189 00:20:24.709 submitted_requests: 7822 00:20:24.709 queued_requests: 1 00:20:24.709 00:20:24.709 ==================== 00:20:24.709 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:24.709 TCP transport: 00:20:24.709 polls: 16095 00:20:24.709 idle_polls: 8814 00:20:24.709 sock_completions: 7281 00:20:24.709 nvme_completions: 5985 00:20:24.709 submitted_requests: 9064 00:20:24.709 queued_requests: 1 00:20:24.709 ======================================================== 00:20:24.709 Latency(us) 00:20:24.709 Device Information : IOPS MiB/s Average min max 00:20:24.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1296.85 324.21 101502.59 59770.83 195939.36 00:20:24.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1495.83 373.96 86547.18 45699.54 117170.13 00:20:24.709 ======================================================== 00:20:24.709 Total : 2792.68 698.17 93492.10 45699.54 195939.36 00:20:24.709 00:20:24.709 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:24.709 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@117 -- # sync 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:24.967 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.968 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.968 rmmod nvme_tcp 00:20:24.968 rmmod nvme_fabrics 00:20:25.226 rmmod nvme_keyring 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1645018 ']' 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1645018 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1645018 ']' 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1645018 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645018 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645018' 00:20:25.226 killing process with pid 1645018 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 1645018 00:20:25.226 20:48:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1645018 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.123 20:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.023 00:20:29.023 real 0m21.765s 00:20:29.023 user 1m8.639s 00:20:29.023 sys 0m5.041s 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:29.023 ************************************ 00:20:29.023 END TEST nvmf_perf 00:20:29.023 ************************************ 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.023 
************************************ 00:20:29.023 START TEST nvmf_fio_host 00:20:29.023 ************************************ 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:29.023 * Looking for test storage... 00:20:29.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.023 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.024 20:48:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.926 20:48:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:30.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:30.926 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:30.926 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:30.926 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.926 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:30.927 00:20:30.927 --- 10.0.0.2 ping statistics --- 00:20:30.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.927 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:30.927 00:20:30.927 --- 10.0.0.1 ping statistics --- 00:20:30.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.927 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.927 
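For readers following the log, the namespace plumbing exercised above (`nvmf_tcp_init` in nvmf/common.sh) boils down to a short command sequence. The following is a hedged dry-run sketch, not the script itself: interface names `cvl_0_0`/`cvl_0_1`, the 10.0.0.0/24 addresses, and port 4420 are taken from the log, the `run` wrapper only prints each command, and executing them for real requires root and the actual NICs.

```shell
#!/bin/sh
# Dry-run sketch of the TCP test-bench setup seen in the log above.
# run() only prints each command; drop it (and run as root) to execute.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk            # target-side network namespace
TGT_IF=cvl_0_0                # target interface (moved into the namespace)
INI_IF=cvl_0_1                # initiator interface (stays in the root ns)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The two-way ping at the end mirrors the verification step the log records before the nvmf target is started inside the namespace.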
20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1649508 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1649508 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1649508 ']' 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.927 20:48:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.927 [2024-07-24 20:48:26.396330] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:20:30.927 [2024-07-24 20:48:26.396425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.927 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.927 [2024-07-24 20:48:26.462097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.186 [2024-07-24 20:48:26.571674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.186 [2024-07-24 20:48:26.571726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.186 [2024-07-24 20:48:26.571751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.186 [2024-07-24 20:48:26.571776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.186 [2024-07-24 20:48:26.571785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.186 [2024-07-24 20:48:26.571930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.186 [2024-07-24 20:48:26.571995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.186 [2024-07-24 20:48:26.572071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.186 [2024-07-24 20:48:26.572074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:32.120 [2024-07-24 20:48:27.637026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.120 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:32.686 Malloc1 00:20:32.686 20:48:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.944 20:48:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:33.201 20:48:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.201 [2024-07-24 20:48:28.765390] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.459 20:48:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.459 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:33.459 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.459 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.459 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:33.460 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:33.717 20:48:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:33.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:33.717 fio-3.35 
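Both fio passes in this log follow the same invocation pattern: `fio_plugin` preloads the SPDK external ioengine and hands fio a transport-ID string as the filename. As a sketch only — the paths and the filename string are copied from the log, while treating `SPDK_DIR` as a configurable root is an assumption, and the final command is echoed rather than executed — the pattern is:

```shell
#!/bin/sh
# Sketch of how the log drives fio against the NVMe-oF target through the
# SPDK fio plugin. SPDK_DIR default is the workspace path from the log;
# the --filename string encodes the TCP transport ID instead of a device.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
PLUGIN=$SPDK_DIR/build/fio/spdk_nvme
JOB=$SPDK_DIR/app/fio/nvme/example_config.fio
FILENAME='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

# LD_PRELOAD makes fio pick up the out-of-tree spdk ioengine; echoed here
# as a dry run since it needs a built SPDK tree and a live target.
echo LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$JOB" \
    "--filename=$FILENAME" --bs=4096
```

The second pass in the log swaps `example_config.fio` for `mock_sgl_config.fio` with the same transport-ID filename.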
00:20:33.717 Starting 1 thread 00:20:33.717 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.242 00:20:36.242 test: (groupid=0, jobs=1): err= 0: pid=1649997: Wed Jul 24 20:48:31 2024 00:20:36.242 read: IOPS=9121, BW=35.6MiB/s (37.4MB/s)(71.5MiB/2007msec) 00:20:36.242 slat (nsec): min=1908, max=175283, avg=2411.17, stdev=1984.19 00:20:36.242 clat (usec): min=2300, max=13047, avg=7701.96, stdev=596.79 00:20:36.242 lat (usec): min=2330, max=13050, avg=7704.37, stdev=596.67 00:20:36.242 clat percentiles (usec): 00:20:36.242 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:20:36.242 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:20:36.242 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:20:36.242 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11863], 99.95th=[12125], 00:20:36.242 | 99.99th=[13042] 00:20:36.242 bw ( KiB/s): min=35344, max=37256, per=99.96%, avg=36470.00, stdev=805.30, samples=4 00:20:36.242 iops : min= 8836, max= 9314, avg=9117.50, stdev=201.32, samples=4 00:20:36.242 write: IOPS=9132, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2007msec); 0 zone resets 00:20:36.242 slat (usec): min=2, max=141, avg= 2.53, stdev= 1.38 00:20:36.242 clat (usec): min=1445, max=12270, avg=6225.43, stdev=510.91 00:20:36.242 lat (usec): min=1454, max=12273, avg=6227.96, stdev=510.87 00:20:36.242 clat percentiles (usec): 00:20:36.242 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:20:36.242 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:20:36.242 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:20:36.242 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 9765], 99.95th=[11600], 00:20:36.242 | 99.99th=[12256] 00:20:36.242 bw ( KiB/s): min=36104, max=36736, per=100.00%, avg=36546.00, stdev=296.28, samples=4 00:20:36.242 iops : min= 9026, max= 9184, avg=9136.50, stdev=74.07, samples=4 00:20:36.242 lat (msec) : 2=0.02%, 4=0.11%, 10=99.74%, 20=0.13% 
00:20:36.242 cpu : usr=61.71%, sys=34.65%, ctx=73, majf=0, minf=38 00:20:36.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:36.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:36.242 issued rwts: total=18306,18329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:36.242 00:20:36.242 Run status group 0 (all jobs): 00:20:36.242 READ: bw=35.6MiB/s (37.4MB/s), 35.6MiB/s-35.6MiB/s (37.4MB/s-37.4MB/s), io=71.5MiB (75.0MB), run=2007-2007msec 00:20:36.242 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2007-2007msec 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:36.243 20:48:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:20:36.498 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:36.498 fio-3.35 00:20:36.498 Starting 1 thread 00:20:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.048 00:20:39.048 test: (groupid=0, jobs=1): err= 0: pid=1650329: Wed Jul 24 20:48:34 2024 00:20:39.048 read: IOPS=7597, BW=119MiB/s (124MB/s)(239MiB/2012msec) 00:20:39.048 slat (nsec): min=2779, max=90637, avg=3615.48, stdev=1649.63 00:20:39.048 clat (usec): min=2599, max=53342, avg=9677.34, stdev=4223.95 00:20:39.048 lat (usec): min=2603, max=53346, avg=9680.95, stdev=4223.96 00:20:39.048 clat percentiles (usec): 00:20:39.048 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7570], 00:20:39.048 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:20:39.048 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12387], 95.00th=[13566], 00:20:39.048 | 99.00th=[17695], 99.50th=[47973], 99.90th=[52167], 99.95th=[53216], 00:20:39.048 | 99.99th=[53216] 00:20:39.048 bw ( KiB/s): min=56352, max=73312, per=51.05%, avg=62056.00, stdev=7689.26, samples=4 00:20:39.048 iops : min= 3522, max= 4582, avg=3878.50, stdev=480.58, samples=4 00:20:39.048 write: IOPS=4486, BW=70.1MiB/s (73.5MB/s)(127MiB/1814msec); 0 zone resets 00:20:39.048 slat (usec): min=30, max=137, avg=33.27, stdev= 5.14 00:20:39.048 clat (usec): min=6623, max=28101, avg=12662.92, stdev=2857.77 00:20:39.048 lat (usec): min=6654, max=28133, avg=12696.18, stdev=2858.10 00:20:39.048 clat percentiles (usec): 00:20:39.048 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:20:39.048 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:20:39.048 | 70.00th=[13960], 80.00th=[14877], 90.00th=[16581], 95.00th=[17957], 00:20:39.048 | 99.00th=[20317], 99.50th=[21103], 99.90th=[27657], 99.95th=[27919], 00:20:39.048 | 99.99th=[28181] 00:20:39.048 bw ( KiB/s): min=59744, max=76416, per=89.91%, 
avg=64536.00, stdev=7962.60, samples=4 00:20:39.048 iops : min= 3734, max= 4776, avg=4033.50, stdev=497.66, samples=4 00:20:39.048 lat (msec) : 4=0.11%, 10=49.62%, 20=49.31%, 50=0.75%, 100=0.21% 00:20:39.048 cpu : usr=72.40%, sys=24.66%, ctx=47, majf=0, minf=60 00:20:39.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:39.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:39.048 issued rwts: total=15286,8138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:39.048 00:20:39.048 Run status group 0 (all jobs): 00:20:39.048 READ: bw=119MiB/s (124MB/s), 119MiB/s-119MiB/s (124MB/s-124MB/s), io=239MiB (250MB), run=2012-2012msec 00:20:39.048 WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=127MiB (133MB), run=1814-1814msec 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.048 rmmod nvme_tcp 00:20:39.048 rmmod nvme_fabrics 00:20:39.048 rmmod nvme_keyring 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1649508 ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1649508 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1649508 ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1649508 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1649508 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:39.048 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1649508' 00:20:39.049 killing process with pid 1649508 00:20:39.049 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1649508 00:20:39.049 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1649508 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.307 20:48:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.832 00:20:41.832 real 0m12.557s 00:20:41.832 user 0m38.235s 00:20:41.832 sys 0m4.051s 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.832 ************************************ 00:20:41.832 END TEST nvmf_fio_host 00:20:41.832 ************************************ 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.832 20:48:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.832 ************************************ 00:20:41.832 START TEST nvmf_failover 00:20:41.832 ************************************ 00:20:41.832 20:48:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:41.832 * Looking for test storage... 00:20:41.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.832 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.833 20:48:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.733 
20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.733 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:43.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:43.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:43.734 20:48:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:43.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:43.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.734 20:48:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:20:43.734 00:20:43.734 --- 10.0.0.2 ping statistics --- 00:20:43.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.734 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:43.734 00:20:43.734 --- 10.0.0.1 ping statistics --- 00:20:43.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.734 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1652528 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1652528 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1652528 ']' 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.734 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.734 [2024-07-24 20:48:39.132672] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:20:43.734 [2024-07-24 20:48:39.132757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.734 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.734 [2024-07-24 20:48:39.195963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:43.993 [2024-07-24 20:48:39.303413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.993 [2024-07-24 20:48:39.303460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.993 [2024-07-24 20:48:39.303475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.993 [2024-07-24 20:48:39.303487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.993 [2024-07-24 20:48:39.303497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.993 [2024-07-24 20:48:39.303576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.993 [2024-07-24 20:48:39.303605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.993 [2024-07-24 20:48:39.303608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.993 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:44.251 [2024-07-24 20:48:39.724404] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.251 20:48:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:44.509 Malloc0 00:20:44.767 20:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.025 20:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.282 20:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.282 [2024-07-24 20:48:40.839053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.541 20:48:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:45.541 [2024-07-24 20:48:41.083754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:45.541 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:45.798 [2024-07-24 20:48:41.332579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1652815 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1652815 /var/tmp/bdevperf.sock 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1652815 ']' 00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.798 20:48:41 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:45.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:45.798 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:46.364 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:46.364 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:20:46.364 20:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:46.621 NVMe0n1
00:20:46.879 20:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:47.136 
00:20:47.136 20:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1652958
00:20:47.136 20:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:47.136 20:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:20:48.510 20:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:48.510 [2024-07-24 20:48:43.962329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e9f40 is same with the state(5) to be set
[... same *ERROR* line for tqpair=0x13e9f40 repeated dozens of times between 20:48:43.962404 and 20:48:43.963353; identical repeats omitted ...]
00:20:48.511 20:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:20:51.789 20:48:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:52.049 
00:20:52.049 20:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:52.307 [2024-07-24 20:48:47.743937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ead10 is same with the state(5) to be set
[... same *ERROR* line for tqpair=0x13ead10 repeated dozens of times between 20:48:47.743992 and 20:48:47.744654; identical repeats omitted ...]
00:20:52.307 20:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:20:55.586 20:48:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:55.586 [2024-07-24 20:48:51.038330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:55.586 20:48:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:20:56.518 20:48:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:57.116 20:48:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1652958
00:21:02.374 0
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1652815
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1652815 ']'
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1652815
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1652815
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1652815'
killing process with pid 1652815
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1652815
00:21:02.374 20:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1652815
00:21:02.636 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:02.636 [2024-07-24 20:48:41.392295] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:21:02.636 [2024-07-24 20:48:41.392378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652815 ]
00:21:02.636 EAL: No free 2048 kB hugepages reported on node 1
00:21:02.636 [2024-07-24 20:48:41.470575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:02.636 [2024-07-24 20:48:41.613175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:02.636 Running I/O for 15 seconds...
00:21:02.636 [2024-07-24 20:48:43.964711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.636 [2024-07-24 20:48:43.964752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command/completion pairs for lba 80664 through 80944, plus WRITE pairs for lba 81152 and 81160, omitted; every command is completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:21:02.637 [2024-07-24 20:48:43.965938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.637 [2024-07-24 20:48:43.965952] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.965967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.965980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.965996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 
[2024-07-24 20:48:43.966297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.637 [2024-07-24 20:48:43.966382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.637 [2024-07-24 20:48:43.966396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.638 [2024-07-24 20:48:43.966658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 
[2024-07-24 20:48:43.966795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.966970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.966984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 
20:48:43.967299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.638 [2024-07-24 20:48:43.967545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.638 [2024-07-24 20:48:43.967560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.967977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.967991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.968005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.968033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.639 [2024-07-24 20:48:43.968061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81552 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81560 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81568 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81576 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81584 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81592 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81632 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81640 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.639 [2024-07-24 20:48:43.968685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.639 [2024-07-24 20:48:43.968696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:21:02.639 [2024-07-24 20:48:43.968708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.639 [2024-07-24 20:48:43.968720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.640 [2024-07-24 20:48:43.968731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.640 [2024-07-24 20:48:43.968741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 PRP1 0x0 PRP2 0x0 00:21:02.640 [2024-07-24 20:48:43.968753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.968766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.640 [2024-07-24 20:48:43.968777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.640 [2024-07-24 20:48:43.968788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 PRP1 0x0 PRP2 0x0 00:21:02.640 [2024-07-24 20:48:43.968800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.968813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.640 [2024-07-24 20:48:43.968823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.640 [2024-07-24 20:48:43.968837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81672 len:8 PRP1 0x0 PRP2 0x0 00:21:02.640 [2024-07-24 20:48:43.968851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.968908] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2157c10 was disconnected and freed. reset controller. 00:21:02.640 [2024-07-24 20:48:43.968926] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:02.640 [2024-07-24 20:48:43.968961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.640 [2024-07-24 20:48:43.968980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.968995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.640 [2024-07-24 20:48:43.969008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.969022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.640 [2024-07-24 20:48:43.969035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.969048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.640 [2024-07-24 20:48:43.969061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:43.969074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:02.640 [2024-07-24 20:48:43.972356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.640 [2024-07-24 20:48:43.972394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213a0f0 (9): Bad file descriptor 00:21:02.640 [2024-07-24 20:48:44.000701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:02.640 [2024-07-24 20:48:47.744919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.744961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.744988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 
20:48:47.745223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:61 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.640 [2024-07-24 20:48:47.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.640 [2024-07-24 20:48:47.745745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.745984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.745999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 
[2024-07-24 20:48:47.746082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.641 [2024-07-24 20:48:47.746379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 
20:48:47.746612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.641 [2024-07-24 20:48:47.746698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.641 [2024-07-24 20:48:47.746711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746765] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.746984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.746999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 
20:48:47.747446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:70 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:02.642 [2024-07-24 20:48:47.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.642 [2024-07-24 20:48:47.747797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.642 [2024-07-24 20:48:47.747849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:21:02.642 [2024-07-24 20:48:47.747862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.642 [2024-07-24 20:48:47.747885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.642 [2024-07-24 20:48:47.747897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.642 [2024-07-24 20:48:47.747909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:21:02.642 [2024-07-24 20:48:47.747921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.747934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.747945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.747956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.747969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 
[2024-07-24 20:48:47.747981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.747992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:02.643 [2024-07-24 20:48:47.748144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748317] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86560 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86568 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86576 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748480] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86584 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86592 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86600 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86608 len:8 PRP1 0x0 PRP2 0x0 
00:21:02.643 [2024-07-24 20:48:47.748648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86616 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86624 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86632 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748812] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86640 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86648 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86656 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.748958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.748969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.748980] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86664 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.748992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.749005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.643 [2024-07-24 20:48:47.749016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.643 [2024-07-24 20:48:47.749027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86672 len:8 PRP1 0x0 PRP2 0x0 00:21:02.643 [2024-07-24 20:48:47.749039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.643 [2024-07-24 20:48:47.749052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86680 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86688 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86696 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86712 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749308] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86720 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86080 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.644 [2024-07-24 20:48:47.749401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.644 [2024-07-24 20:48:47.749412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86088 len:8 PRP1 0x0 PRP2 0x0 00:21:02.644 [2024-07-24 20:48:47.749424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2168d40 was disconnected and freed. reset controller. 
00:21:02.644 [2024-07-24 20:48:47.749498] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:02.644 [2024-07-24 20:48:47.749529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.644 [2024-07-24 20:48:47.749547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.644 [2024-07-24 20:48:47.749575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.644 [2024-07-24 20:48:47.749602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.644 [2024-07-24 20:48:47.749627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:47.749640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:02.644 [2024-07-24 20:48:47.749677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213a0f0 (9): Bad file descriptor 00:21:02.644 [2024-07-24 20:48:47.752910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.644 [2024-07-24 20:48:47.826640] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:02.644 [2024-07-24 20:48:52.341072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.644 [2024-07-24 20:48:52.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.644 [2024-07-24 20:48:52.341740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:02.645 [2024-07-24 20:48:52.341798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.341975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.341990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 [2024-07-24 20:48:52.342273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.645 
[2024-07-24 20:48:52.342302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.645 [2024-07-24 20:48:52.342335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.645 [2024-07-24 20:48:52.342365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.645 [2024-07-24 20:48:52.342380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.646 [2024-07-24 20:48:52.342888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.646 [2024-07-24 20:48:52.342904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.342917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.342933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.342947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.342961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.342975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.342990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 
20:48:52.343137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343303] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.647 [2024-07-24 20:48:52.343712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.343766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.343779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.343808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.343819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.343832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.343861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.343872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24056 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.343884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.343908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.343918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.343931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.343954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.343965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.343977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.343990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.344001] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.344012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24080 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.344024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.344038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.344049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.647 [2024-07-24 20:48:52.344060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0 00:21:02.647 [2024-07-24 20:48:52.344073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.647 [2024-07-24 20:48:52.344085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.647 [2024-07-24 20:48:52.344096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.648 [2024-07-24 20:48:52.344107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0 00:21:02.648 [2024-07-24 20:48:52.344119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.648 [2024-07-24 20:48:52.344132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.648 [2024-07-24 20:48:52.344143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.648 [2024-07-24 20:48:52.344154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24104 len:8 PRP1 0x0 PRP2 0x0 00:21:02.648 
[2024-07-24 20:48:52.344166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.648 [2024-07-24 20:48:52.344179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:02.648 [2024-07-24 20:48:52.344190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:02.648 [2024-07-24 20:48:52.344201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0 00:21:02.648
[... the same completion/abort/manual-completion record cycle repeats for queued WRITE commands at lba:24120 through lba:24344 (len:8, LBA step 8) and for queued READ commands at lba:23640 and lba:23648, timestamps 20:48:52.344216 through 20:48:52.345705 ...]
[2024-07-24 20:48:52.345771] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x216ab40 was disconnected and freed. reset controller. 
00:21:02.649 [2024-07-24 20:48:52.345788] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:02.649 [2024-07-24 20:48:52.345823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.649 [2024-07-24 20:48:52.345842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.649 [2024-07-24 20:48:52.345857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.649 [2024-07-24 20:48:52.345870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.649 [2024-07-24 20:48:52.345884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.649 [2024-07-24 20:48:52.345897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.649 [2024-07-24 20:48:52.345919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.649 [2024-07-24 20:48:52.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.649 [2024-07-24 20:48:52.345945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:02.649 [2024-07-24 20:48:52.346000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213a0f0 (9): Bad file descriptor 00:21:02.649 [2024-07-24 20:48:52.349236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.649 [2024-07-24 20:48:52.459000] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:02.649 00:21:02.649 Latency(us) 00:21:02.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.649 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:02.649 Verification LBA range: start 0x0 length 0x4000 00:21:02.649 NVMe0n1 : 15.02 8441.57 32.97 546.38 0.00 14214.06 794.93 16893.72 00:21:02.649 =================================================================================================================== 00:21:02.649 Total : 8441.57 32.97 546.38 0.00 14214.06 794.93 16893.72 00:21:02.649 Received shutdown signal, test time was about 15.000000 seconds 00:21:02.649 00:21:02.649 Latency(us) 00:21:02.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.649 =================================================================================================================== 00:21:02.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1654797 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 1 -f 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1654797 /var/tmp/bdevperf.sock 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1654797 ']' 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.649 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:02.907 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.907 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:02.907 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:03.164 [2024-07-24 20:48:58.672078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:03.164 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:03.421 [2024-07-24 20:48:58.932780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:03.421 20:48:58 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.986 NVMe0n1 00:21:03.986 20:48:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.244 00:21:04.244 20:48:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:04.501 00:21:04.501 20:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.501 20:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:04.758 20:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.016 20:49:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:08.294 20:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:08.294 20:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:08.294 20:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1655468 00:21:08.294 20:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.294 20:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1655468 00:21:09.667 0 00:21:09.667 20:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:09.667 [2024-07-24 20:48:58.170991] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:21:09.667 [2024-07-24 20:48:58.171078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654797 ] 00:21:09.667 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.667 [2024-07-24 20:48:58.231313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.667 [2024-07-24 20:48:58.337100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.667 [2024-07-24 20:49:00.505353] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:09.667 [2024-07-24 20:49:00.505445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.667 [2024-07-24 20:49:00.505469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.667 [2024-07-24 20:49:00.505487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.667 [2024-07-24 20:49:00.505501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.667 [2024-07-24 20:49:00.505515] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.667 [2024-07-24 20:49:00.505528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.667 [2024-07-24 20:49:00.505542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.667 [2024-07-24 20:49:00.505555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.667 [2024-07-24 20:49:00.505569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:09.667 [2024-07-24 20:49:00.505616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:09.667 [2024-07-24 20:49:00.505648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24640f0 (9): Bad file descriptor 00:21:09.667 [2024-07-24 20:49:00.515708] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:09.667 Running I/O for 1 seconds... 
00:21:09.667 00:21:09.667 Latency(us) 00:21:09.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.667 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:09.667 Verification LBA range: start 0x0 length 0x4000 00:21:09.667 NVMe0n1 : 1.01 7874.63 30.76 0.00 0.00 16161.88 3276.80 14951.92 00:21:09.667 =================================================================================================================== 00:21:09.667 Total : 7874.63 30.76 0.00 0.00 16161.88 3276.80 14951.92 00:21:09.667 20:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.667 20:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:09.667 20:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.924 20:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.924 20:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:10.181 20:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.439 20:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:13.715 20:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:13.715 
20:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1654797 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1654797 ']' 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1654797 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654797 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1654797' 00:21:13.715 killing process with pid 1654797 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1654797 00:21:13.715 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1654797 00:21:13.971 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:13.971 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.227 rmmod nvme_tcp 00:21:14.227 rmmod nvme_fabrics 00:21:14.227 rmmod nvme_keyring 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1652528 ']' 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1652528 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1652528 ']' 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1652528 00:21:14.227 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1652528 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1652528' 00:21:14.484 killing process with pid 1652528 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1652528 00:21:14.484 20:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1652528 00:21:14.742 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.742 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.742 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.742 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.742 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.743 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.743 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.743 20:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.641 00:21:16.641 real 0m35.203s 00:21:16.641 user 2m4.426s 00:21:16.641 sys 0m5.785s 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:16.641 ************************************ 00:21:16.641 END TEST nvmf_failover 00:21:16.641 ************************************ 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.641 20:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.920 ************************************ 00:21:16.920 START TEST nvmf_host_discovery 00:21:16.920 ************************************ 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:16.920 * Looking for test storage... 00:21:16.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.920 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.921 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.827 20:49:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.827 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:18.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.828 20:49:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:18.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.828 20:49:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:18.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:18.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.828 20:49:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:21:18.828 00:21:18.828 --- 10.0.0.2 ping statistics --- 00:21:18.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.828 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:18.828 00:21:18.828 --- 10.0.0.1 ping statistics --- 00:21:18.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.828 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1658068 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1658068 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1658068 ']' 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.828 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.828 [2024-07-24 20:49:14.391599] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:21:18.828 [2024-07-24 20:49:14.391710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.086 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.086 [2024-07-24 20:49:14.457360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.086 [2024-07-24 20:49:14.568399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.086 [2024-07-24 20:49:14.568452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:19.086 [2024-07-24 20:49:14.568480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.086 [2024-07-24 20:49:14.568492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.086 [2024-07-24 20:49:14.568502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.086 [2024-07-24 20:49:14.568546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 [2024-07-24 20:49:14.715311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 [2024-07-24 20:49:14.723559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 null0 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 null1 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1658210 00:21:19.344 
20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1658210 /tmp/host.sock 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1658210 ']' 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:19.344 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.344 20:49:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.344 [2024-07-24 20:49:14.803286] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:21:19.344 [2024-07-24 20:49:14.803379] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658210 ] 00:21:19.344 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.344 [2024-07-24 20:49:14.869354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.602 [2024-07-24 20:49:14.983020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.602 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
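Helpers like `get_subsystem_names` and `get_bdev_list` in this trace pipe `rpc_cmd ... | jq -r '.[].name' | sort | xargs` to flatten a JSON array of objects into one sorted, space-separated string, which the test then compares with a plain `[[ "$a" == "$b" ]]`. A sketch of the sort-and-flatten stage, with a `printf`'d name list standing in for the `rpc_cmd | jq` front half so it runs without SPDK or jq:

```shell
#!/usr/bin/env bash
# Canonicalize a name set for string comparison: one name per line,
# sort for a stable order, then xargs joins them with single spaces.
get_bdev_list() {
    # Stand-in for: rpc_cmd bdev_get_bdevs | jq -r '.[].name'
    printf '%s\n' nvme0n2 nvme0n1 | sort | xargs
}

names=$(get_bdev_list)
echo "$names"   # nvme0n1 nvme0n2
[[ "$names" == "nvme0n1 nvme0n2" ]] && echo "bdev list matches"
```

Sorting makes the comparison order-independent, and `xargs` (in its default echo mode) collapses the newlines, so the empty case degrades cleanly to `''` — which is exactly what the `[[ '' == '' ]]` checks above are asserting before any controllers exist.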
00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 [2024-07-24 20:49:15.413325] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.860 
20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:19.860 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:20.118 20:49:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.118 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:20.119 20:49:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:21:20.684 [2024-07-24 20:49:16.173062] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:20.684 [2024-07-24 20:49:16.173095] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:20.684 [2024-07-24 20:49:16.173121] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:20.941 [2024-07-24 20:49:16.301564] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:21.199 [2024-07-24 20:49:16.527954] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:21.199 [2024-07-24 20:49:16.527987] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.199 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:21.458 
20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:21.458 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.459 [2024-07-24 20:49:16.861429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.459 [2024-07-24 20:49:16.862089] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:21.459 [2024-07-24 20:49:16.862135] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.459 [2024-07-24 20:49:16.989966] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:21.459 20:49:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # 
sleep 1 00:21:21.717 [2024-07-24 20:49:17.088656] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:21.717 [2024-07-24 20:49:17.088682] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:21.717 [2024-07-24 20:49:17.088693] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:22.651 20:49:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.651 20:49:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.651 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.651 [2024-07-24 20:49:18.081999] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:22.652 [2024-07-24 20:49:18.082043] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:22.652 [2024-07-24 20:49:18.088755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.652 [2024-07-24 20:49:18.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.652 [2024-07-24 20:49:18.088819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.652 [2024-07-24 20:49:18.088834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.652 [2024-07-24 20:49:18.088849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.652 [2024-07-24 20:49:18.088862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.652 [2024-07-24 20:49:18.088876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:22.652 [2024-07-24 20:49:18.088890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.652 [2024-07-24 20:49:18.088904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.652 [2024-07-24 20:49:18.098750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.108795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.109046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 [2024-07-24 20:49:18.109084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.109102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.109125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.109158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.109176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:22.652 
[2024-07-24 20:49:18.109192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.652 [2024-07-24 20:49:18.109212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.652 [2024-07-24 20:49:18.118882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.119099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 [2024-07-24 20:49:18.119126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.119142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.119164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.119195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.119212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:22.652 [2024-07-24 20:49:18.119226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.652 [2024-07-24 20:49:18.119253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
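The retry loops interleaved through this log all come from one polling helper; a minimal sketch reconstructed from the `common/autotest_common.sh@914`-`@920` xtrace lines above (the body is inferred from the trace, not copied from SPDK sources):

```shell
# Reconstruction of the waitforcondition helper whose xtrace appears above:
# retry a shell condition up to 10 times, sleeping 1 s between attempts,
# and fail if it never holds.
waitforcondition() {
    local cond=$1    # @914: the condition string, eval'd verbatim
    local max=10     # @915: retry budget
    while (( max-- )); do           # @916
        eval "$cond" && return 0    # @917/@918: condition held
        sleep 1                     # @920: back off before the next poll
    done
    return 1
}

# Hypothetical usage: block until a file becomes non-empty.
# waitforcondition '[[ -s /tmp/ready ]]'
```

Because the condition is re-eval'd each pass, command substitutions inside it (such as `$(get_bdev_list)` in the traces above) are re-run on every poll, which is why the log shows the `@917` query lines repeating once per second until the comparison matches.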
00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.652 [2024-07-24 20:49:18.128968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.129151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:22.652 [2024-07-24 20:49:18.129184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.129203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.129227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.129272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.129306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization 
failed 00:21:22.652 [2024-07-24 20:49:18.129320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.652 [2024-07-24 20:49:18.129345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:22.652 [2024-07-24 20:49:18.139053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.139287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 [2024-07-24 20:49:18.139324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.139340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.139362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.139382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.139396] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:22.652 [2024-07-24 20:49:18.139410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.652 [2024-07-24 20:49:18.139428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.652 [2024-07-24 20:49:18.149130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.149307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 [2024-07-24 20:49:18.149335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.149351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.149372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.149392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.149405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:22.652 [2024-07-24 20:49:18.149419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.652 [2024-07-24 20:49:18.149437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.652 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.652 [2024-07-24 20:49:18.159215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:22.652 [2024-07-24 20:49:18.159428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.652 [2024-07-24 20:49:18.159456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172bc20 with addr=10.0.0.2, port=4420 00:21:22.652 [2024-07-24 20:49:18.159471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172bc20 is same with the state(5) to be set 00:21:22.652 [2024-07-24 20:49:18.159493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172bc20 (9): Bad file descriptor 00:21:22.652 [2024-07-24 20:49:18.159518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:22.652 [2024-07-24 20:49:18.159533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:22.652 [2024-07-24 20:49:18.159546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:22.653 [2024-07-24 20:49:18.159564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
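The `host/discovery.sh@55` and `@63` trace lines above show the shape of the query helpers being polled; a sketch reconstructed from that xtrace (`rpc_cmd` is assumed to be the test suite's JSON-RPC client wrapper, as used throughout this log):

```shell
# Sketches of the two query helpers traced above: each issues an RPC
# against the host app's socket, extracts one field with jq, and
# normalizes the result into a sorted, space-separated string that the
# waitforcondition comparisons can match exactly.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # $1 is the controller name, e.g. nvme0; trsvcid is the TCP port,
    # so "4420 4421" above means both listeners are attached.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```

The `sort | xargs` tail is what makes the string comparisons in the log stable: `xargs` collapses the newline-separated jq output onto one space-separated line, so `"nvme0n1 nvme0n2"` compares equal regardless of RPC ordering.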
00:21:22.653 [2024-07-24 20:49:18.168859] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:22.653 [2024-07-24 20:49:18.168893] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.653 20:49:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:22.653 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:22.911 20:49:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:22.911 20:49:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:22.911 20:49:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.911 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.285 [2024-07-24 20:49:19.452375] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:24.285 [2024-07-24 20:49:19.452413] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:24.285 [2024-07-24 20:49:19.452436] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:24.285 [2024-07-24 20:49:19.538741] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:24.285 [2024-07-24 20:49:19.808898] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:24.285 [2024-07-24 20:49:19.808960] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.285 request: 00:21:24.285 { 00:21:24.285 "name": "nvme", 00:21:24.285 "trtype": "tcp", 
00:21:24.285 "traddr": "10.0.0.2", 00:21:24.285 "adrfam": "ipv4", 00:21:24.285 "trsvcid": "8009", 00:21:24.285 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:24.285 "wait_for_attach": true, 00:21:24.285 "method": "bdev_nvme_start_discovery", 00:21:24.285 "req_id": 1 00:21:24.285 } 00:21:24.285 Got JSON-RPC error response 00:21:24.285 response: 00:21:24.285 { 00:21:24.285 "code": -17, 00:21:24.285 "message": "File exists" 00:21:24.285 } 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:24.285 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 
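The `NOT rpc_cmd ... bdev_nvme_start_discovery` invocations above are expected-failure assertions: a second discovery registration under the same name must be rejected with `-17` / `"File exists"`, so the test passes only when the RPC errors out. A simplified sketch of the inversion wrapper (the real helper in `common/autotest_common.sh` also classifies the exit-status ranges traced at `@650`-`@677`; that bookkeeping is omitted here):

```shell
# Simplified expected-failure wrapper: run the command and invert its
# exit status, so "NOT cmd" succeeds exactly when cmd fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as this assertion requires
}

# Hypothetical usage mirroring the log: the duplicate registration
# below must fail for the test to proceed.
# NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme ...
```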
00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:24.544 20:49:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.544 request: 00:21:24.544 { 00:21:24.544 "name": "nvme_second", 00:21:24.544 "trtype": "tcp", 00:21:24.544 "traddr": "10.0.0.2", 00:21:24.544 "adrfam": "ipv4", 00:21:24.544 "trsvcid": "8009", 00:21:24.544 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:24.544 "wait_for_attach": true, 00:21:24.544 "method": "bdev_nvme_start_discovery", 00:21:24.544 "req_id": 1 00:21:24.544 } 00:21:24.544 Got JSON-RPC error response 00:21:24.544 response: 00:21:24.544 { 00:21:24.544 "code": -17, 00:21:24.544 "message": "File exists" 00:21:24.544 } 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
jq -r '.[].name' 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:24.544 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
local es=0 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.544 20:49:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:25.478 [2024-07-24 20:49:21.028565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.478 [2024-07-24 20:49:21.028628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172f030 with addr=10.0.0.2, port=8010 00:21:25.478 [2024-07-24 20:49:21.028659] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:25.478 [2024-07-24 20:49:21.028675] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:25.478 [2024-07-24 20:49:21.028688] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:26.850 [2024-07-24 20:49:22.030983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.850 [2024-07-24 
20:49:22.031055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172f030 with addr=10.0.0.2, port=8010 00:21:26.850 [2024-07-24 20:49:22.031088] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:26.850 [2024-07-24 20:49:22.031105] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:26.850 [2024-07-24 20:49:22.031119] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:27.783 [2024-07-24 20:49:23.033138] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:27.783 request: 00:21:27.783 { 00:21:27.783 "name": "nvme_second", 00:21:27.783 "trtype": "tcp", 00:21:27.783 "traddr": "10.0.0.2", 00:21:27.783 "adrfam": "ipv4", 00:21:27.783 "trsvcid": "8010", 00:21:27.783 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:27.783 "wait_for_attach": false, 00:21:27.783 "attach_timeout_ms": 3000, 00:21:27.783 "method": "bdev_nvme_start_discovery", 00:21:27.783 "req_id": 1 00:21:27.783 } 00:21:27.783 Got JSON-RPC error response 00:21:27.783 response: 00:21:27.783 { 00:21:27.783 "code": -110, 00:21:27.783 "message": "Connection timed out" 00:21:27.783 } 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1658210 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.783 rmmod nvme_tcp 00:21:27.783 rmmod nvme_fabrics 00:21:27.783 rmmod nvme_keyring 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.783 20:49:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1658068 ']' 00:21:27.783 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1658068 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1658068 ']' 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1658068 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658068 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658068' 00:21:27.784 killing process with pid 1658068 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1658068 00:21:27.784 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1658068 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.042 20:49:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.042 20:49:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.940 00:21:29.940 real 0m13.268s 00:21:29.940 user 0m19.335s 00:21:29.940 sys 0m2.799s 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.940 ************************************ 00:21:29.940 END TEST nvmf_host_discovery 00:21:29.940 ************************************ 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.940 20:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.199 ************************************ 00:21:30.199 START TEST nvmf_host_multipath_status 00:21:30.199 ************************************ 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:21:30.199 * Looking for test storage... 00:21:30.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.199 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.200 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.200 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.200 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.200 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.200 20:49:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.104 
20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:32.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:32.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:32.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.104 20:49:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:32.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.104 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.105 20:49:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.105 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:32.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:32.364 00:21:32.364 --- 10.0.0.2 ping statistics --- 00:21:32.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.364 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:21:32.364 00:21:32.364 --- 10.0.0.1 ping statistics --- 00:21:32.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.364 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:32.364 20:49:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1661242 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1661242 00:21:32.364 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1661242 ']' 00:21:32.365 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.365 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.365 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.365 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.365 20:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:32.365 [2024-07-24 20:49:27.834056] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:21:32.365 [2024-07-24 20:49:27.834143] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.365 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.365 [2024-07-24 20:49:27.901972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:32.623 [2024-07-24 20:49:28.020361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.623 [2024-07-24 20:49:28.020447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.623 [2024-07-24 20:49:28.020464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.623 [2024-07-24 20:49:28.020477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.623 [2024-07-24 20:49:28.020488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:32.623 [2024-07-24 20:49:28.020573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.623 [2024-07-24 20:49:28.020580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1661242 00:21:33.555 20:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:33.555 [2024-07-24 20:49:29.030125] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.555 20:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:33.822 Malloc0 00:21:33.822 20:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:34.114 20:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.372 20:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.630 [2024-07-24 20:49:30.040987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.630 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:34.887 [2024-07-24 20:49:30.281482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1661541 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1661541 /var/tmp/bdevperf.sock 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1661541 ']' 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.888 20:49:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.888 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:35.145 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.145 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:35.145 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:35.402 20:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:35.659 Nvme0n1 00:21:35.917 20:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:36.174 Nvme0n1 00:21:36.174 20:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:36.174 20:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:21:38.698 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:38.698 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:38.698 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:38.698 20:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:39.630 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:39.630 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:39.630 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.630 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:39.888 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.888 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:39.888 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.888 20:49:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:40.146 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:40.146 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:40.146 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.146 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:40.404 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.404 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:40.404 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.404 20:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:40.661 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.661 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:40.661 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.661 
20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:40.919 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.919 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:40.919 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.919 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:41.177 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.177 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:41.177 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:41.435 20:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:41.693 20:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:42.626 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:42.626 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:42.626 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.626 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.884 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.884 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:42.884 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.884 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:43.141 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.141 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:43.142 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.142 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:43.399 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.399 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:43.399 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.399 20:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:43.656 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.656 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:43.656 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.656 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:43.914 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.914 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:43.914 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.914 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:44.172 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.172 20:49:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:44.172 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:44.429 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:44.686 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:45.619 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:45.619 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:45.619 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.619 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:45.877 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.877 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:45.877 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.877 20:49:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:46.135 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:46.135 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:46.135 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.135 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:46.393 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.393 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:46.393 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.393 20:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:46.650 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.650 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:46.650 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.650 
20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:46.908 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.908 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:46.908 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.908 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:47.166 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.166 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:47.166 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.424 20:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:47.680 20:49:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:49.049 20:49:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.049 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:49.307 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:49.307 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:49.307 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.307 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:49.565 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.565 20:49:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:49.565 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.565 20:49:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:49.823 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.823 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:49.823 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.823 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:50.079 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:50.079 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:50.079 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.079 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:50.336 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:50.336 
20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:50.336 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:50.594 20:49:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:50.851 20:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:51.781 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:51.781 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:51.781 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.781 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:52.038 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:52.038 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:52.038 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.038 20:49:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:52.296 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:52.296 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:52.296 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.296 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:52.553 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.553 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:52.553 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.553 20:49:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:52.810 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.810 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:52.810 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.810 
20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:53.067 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:53.067 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:53.067 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.067 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:53.324 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:53.324 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:53.324 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:53.581 20:49:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:53.839 20:49:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:54.772 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:54.772 20:49:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:54.772 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.772 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:55.029 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:55.029 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:55.029 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.029 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:55.287 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.287 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:55.287 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.287 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:55.545 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.545 20:49:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:55.545 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.545 20:49:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:55.802 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.802 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:55.802 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.802 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:56.060 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.060 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:56.060 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.060 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:56.318 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.318 
20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:56.576 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:56.576 20:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:56.834 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:57.091 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:58.023 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:58.023 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:58.024 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.024 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.281 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.281 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:58.281 
20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.281 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.538 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.538 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.538 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.538 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.796 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.796 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.796 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.796 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:59.053 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.053 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:59.053 
20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.053 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.311 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.311 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.311 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.311 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:59.569 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.569 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:59.569 20:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:59.827 20:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:00.084 20:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
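Every `port_status` check in the trace above follows one pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, select the io_path for one listener port with a jq filter such as `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current`, and compare the resulting boolean against the expected value. A minimal Python sketch of that selection logic — the payload shape is inferred from the jq filter in the log, and the sample values below are invented for illustration, not taken from a real RPC response:

```python
# Sketch of the jq filter used by port_status in the trace:
#   .poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current
# Payload shape inferred from that filter; the values are illustrative only.
sample = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]}
    ]
}

def port_status(payload, port, field):
    """Return the requested boolean for the io_path listening on `port`."""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(f"no io_path with trsvcid {port}")

print(port_status(sample, "4420", "current"))     # True for this sample
print(port_status(sample, "4421", "accessible"))  # False for this sample
```

The shell test then string-compares the printed value (`[[ true == \t\r\u\e ]]`), which is why each check in the trace appears as an RPC call, a jq filter, and a `[[ … ]]` comparison in sequence.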
00:22:01.017 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:01.017 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:01.017 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.017 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.275 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:01.275 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:01.275 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.275 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.536 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.536 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.536 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.536 20:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:01.794 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.794 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.794 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.794 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:02.052 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.052 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:02.052 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.052 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.310 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.310 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:02.310 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.310 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:02.568 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.568 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:02.568 20:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:02.826 20:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:03.084 20:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:04.018 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:04.018 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:04.018 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.018 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.312 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.312 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:04.312 20:49:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.312 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.574 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.574 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.574 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.574 20:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.832 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.832 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.832 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.832 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:05.090 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.090 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:05.090 20:50:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.090 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.348 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.348 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:05.348 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.348 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.606 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.606 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:05.606 20:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:05.864 20:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:06.122 20:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
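Each `set_ANA_state` step traced above issues one `nvmf_subsystem_listener_set_ana_state` RPC per listener (ports 4420 and 4421) against the target, then the caller sleeps one second before re-checking path state so the ANA change can propagate to the host. A sketch that assembles the two command lines as they appear in the log — the `rpc.py` path, NQN, and address are copied from the trace; this only builds the argv lists for illustration and does not execute them:

```python
# Constants copied verbatim from the traced rpc.py invocations.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
ADDR = "10.0.0.2"

def set_ana_state_cmds(state_4420, state_4421):
    """Build the two rpc.py invocations set_ANA_state runs, one per listener."""
    cmds = []
    for port, state in (("4420", state_4420), ("4421", state_4421)):
        cmds.append([RPC, "nvmf_subsystem_listener_set_ana_state", NQN,
                     "-t", "tcp", "-a", ADDR, "-s", port, "-n", state])
    return cmds

# Mirrors the "set_ANA_state non_optimized inaccessible" step in the trace.
cmds = set_ana_state_cmds("non_optimized", "inaccessible")
```

The test matrix in the trace simply cycles these two states through the combinations (optimized, non_optimized, inaccessible) and asserts the expected `current`/`connected`/`accessible` flags after each transition.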
00:22:07.055 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:07.055 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:07.055 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.055 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.312 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.312 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:07.312 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.312 20:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.570 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.570 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.570 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.570 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:07.827 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.827 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:07.827 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.827 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.085 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.085 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.085 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.085 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.343 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.343 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:08.343 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.343 20:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1661541 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1661541 ']' 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1661541 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.600 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1661541 00:22:08.601 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:08.601 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:08.601 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1661541' 00:22:08.601 killing process with pid 1661541 00:22:08.601 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1661541 00:22:08.601 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1661541 00:22:08.861 Connection closed with partial response: 00:22:08.861 00:22:08.861 00:22:08.861 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1661541 00:22:08.861 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
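The shutdown at the end of the trace follows `killprocess` from `autotest_common.sh`: verify the pid argument is non-empty (`[ -z … ]`), confirm the process is alive with `kill -0`, look up its command name with `ps --no-headers -o comm=`, refuse to signal a process named `sudo`, and only then kill and `wait` on it. A pure-Python sketch of those guards — the real helper shells out to `ps` and `kill`; the function and its inputs here are illustrative stand-ins:

```python
def should_kill(pid, alive, comm):
    """Mirror killprocess's guards: pid set, process alive, and not 'sudo'."""
    if not pid:
        return False   # '[ -z "$pid" ]' guard: no pid was passed
    if not alive:
        return False   # 'kill -0 $pid' failed: process already gone
    if comm == "sudo":
        return False   # never signal the sudo wrapper itself
    return True        # safe to kill and wait on the process

# Matches the traced run: pid 1661541 resolved to comm 'reactor_2'.
print(should_kill("1661541", True, "reactor_2"))
```

The "Connection closed with partial response" line that follows is bdevperf reporting the in-flight I/O cut off by that kill, after which the harness `wait`s on the pid and dumps `try.txt`.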
00:22:08.861 [2024-07-24 20:49:30.337773] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:22:08.861 [2024-07-24 20:49:30.337849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661541 ] 00:22:08.861 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.861 [2024-07-24 20:49:30.395846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.861 [2024-07-24 20:49:30.509569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.861 Running I/O for 90 seconds... 00:22:08.861 [2024-07-24 20:49:45.937808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-24 20:49:45.937871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:08.861 [2024-07-24 20:49:45.937931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-24 20:49:45.937953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:08.861 [2024-07-24 20:49:45.937977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-24 20:49:45.937994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:08.862 [2024-07-24 20:49:45.938016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:08.862 [2024-07-24 20:49:45.938032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:22:08.863 [... repetitive nvme_qpair.c NOTICE pairs elided: WRITE sqid:1 lba:38808-38952 and READ sqid:1 lba:37952-38536 commands, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:08.864 [2024-07-24 20:49:45.943414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.943963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.943991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-24 20:49:45.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-24 20:49:45.944049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:08.864 [2024-07-24 20:49:45.944315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-24 20:49:45.944331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:49:45.944792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:49:45.944808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:50:01.494336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.494951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.494982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.865 [2024-07-24 20:50:01.495005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.497976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.497997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.498012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.498032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.498047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.498068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-24 20:50:01.498082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:08.865 [2024-07-24 20:50:01.498103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.866 [2024-07-24 20:50:01.498118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.866 [2024-07-24 20:50:01.498152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.866 [2024-07-24 20:50:01.498192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.866 [2024-07-24 20:50:01.498253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.866 [2024-07-24 20:50:01.498294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.866 [2024-07-24 20:50:01.498331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.866 [2024-07-24 20:50:01.498367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:08.866 [2024-07-24 20:50:01.498389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.866 [2024-07-24 20:50:01.498405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.866 Received shutdown signal, test time was about 32.291152 
seconds
00:22:08.866
00:22:08.866 Latency(us)
00:22:08.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:08.866 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:08.866 Verification LBA range: start 0x0 length 0x4000
00:22:08.866 Nvme0n1 : 32.29 7811.78 30.51 0.00 0.00 16338.18 306.44 4026531.84
00:22:08.866 ===================================================================================================================
00:22:08.866 Total : 7811.78 30.51 0.00 0.00 16338.18 306.44 4026531.84
00:22:08.866 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
20:50:04
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1661242 ']'
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1661242
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1661242 ']'
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1661242
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:09.124 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1661242
00:22:09.382 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:09.382 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:09.382 20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1661242'
killing process with pid 1661242
20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1661242
20:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1661242
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:09.640 20:50:05
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:09.640 20:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:11.538
00:22:11.538 real 0m41.523s
00:22:11.538 user 2m4.697s
00:22:11.538 sys 0m10.298s
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:11.538 ************************************
00:22:11.538 END TEST nvmf_host_multipath_status
00:22:11.538 ************************************
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:11.538 ************************************
START TEST nvmf_discovery_remove_ifc
00:22:11.538 ************************************
00:22:11.538 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:11.797 * Looking for test storage...
00:22:11.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:11.797 20:50:07
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.797 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.798 20:50:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.698 20:50:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.698 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:13.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:13.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:13.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.699 20:50:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:13.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.699 20:50:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.699 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:13.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:13.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:13.957 00:22:13.957 --- 10.0.0.2 ping statistics --- 00:22:13.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.957 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:13.957 00:22:13.957 --- 10.0.0.1 ping statistics --- 00:22:13.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.957 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1667732 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1667732 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1667732 ']' 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.957 20:50:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 [2024-07-24 20:50:09.414595] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:22:13.957 [2024-07-24 20:50:09.414697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.957 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.957 [2024-07-24 20:50:09.484446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.215 [2024-07-24 20:50:09.601074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.215 [2024-07-24 20:50:09.601134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.215 [2024-07-24 20:50:09.601150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.215 [2024-07-24 20:50:09.601163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.215 [2024-07-24 20:50:09.601175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.215 [2024-07-24 20:50:09.601204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:15.148 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.149 [2024-07-24 20:50:10.393729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.149 [2024-07-24 20:50:10.401910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:15.149 null0 00:22:15.149 [2024-07-24 20:50:10.433858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1667891 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1667891 /tmp/host.sock 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1667891 ']' 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:15.149 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.149 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.149 [2024-07-24 20:50:10.503429] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:22:15.149 [2024-07-24 20:50:10.503511] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667891 ] 00:22:15.149 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.149 [2024-07-24 20:50:10.568655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.149 [2024-07-24 20:50:10.685785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.406 20:50:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:16.340 [2024-07-24 20:50:11.871353] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:16.340 [2024-07-24 20:50:11.871391] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:16.340 [2024-07-24 20:50:11.871413] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:16.598 [2024-07-24 20:50:11.958710] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:16.856 [2024-07-24 20:50:12.185098] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:16.856 [2024-07-24 20:50:12.185167] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:16.856 [2024-07-24 20:50:12.185214] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:16.856 [2024-07-24 20:50:12.185252] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:16.856 [2024-07-24 20:50:12.185318] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.856 [2024-07-24 20:50:12.190280] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1caf8e0 was disconnected and freed. delete nvme_qpair. 
00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:16.856 20:50:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:17.789 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.046 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:18.046 20:50:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:18.979 20:50:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:19.938 20:50:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:21.305 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:21.306 20:50:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:21.306 20:50:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.236 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:22.237 20:50:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:22:22.237 [2024-07-24 20:50:17.625965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:22.237 [2024-07-24 20:50:17.626030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.237 [2024-07-24 20:50:17.626053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.237 [2024-07-24 20:50:17.626072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.237 [2024-07-24 20:50:17.626087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.237 [2024-07-24 20:50:17.626103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.237 [2024-07-24 20:50:17.626117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.237 [2024-07-24 20:50:17.626133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.237 [2024-07-24 20:50:17.626147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.237 [2024-07-24 20:50:17.626163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.237 [2024-07-24 20:50:17.626177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.237 [2024-07-24 20:50:17.626191] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c76320 is same with the state(5) to be set 00:22:22.237 [2024-07-24 20:50:17.635985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c76320 (9): Bad file descriptor 00:22:22.237 [2024-07-24 20:50:17.646031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:23.168 [2024-07-24 20:50:18.654270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:23.168 [2024-07-24 20:50:18.654328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c76320 with addr=10.0.0.2, port=4420 00:22:23.168 [2024-07-24 20:50:18.654349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c76320 is same with the state(5) to be set 00:22:23.168 [2024-07-24 20:50:18.654378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c76320 (9): Bad file descriptor 00:22:23.168 [2024-07-24 20:50:18.654772] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable 
to perform failover, already in progress. 00:22:23.168 [2024-07-24 20:50:18.654813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:23.168 [2024-07-24 20:50:18.654833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:23.168 [2024-07-24 20:50:18.654855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:23.168 [2024-07-24 20:50:18.654879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:23.168 [2024-07-24 20:50:18.654898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:23.168 20:50:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:24.100 [2024-07-24 20:50:19.657389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:24.100 [2024-07-24 20:50:19.657416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:24.100 [2024-07-24 20:50:19.657444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:24.100 [2024-07-24 20:50:19.657456] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:24.100 [2024-07-24 20:50:19.657474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:24.100 [2024-07-24 20:50:19.657507] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:24.100 [2024-07-24 20:50:19.657556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.100 [2024-07-24 20:50:19.657578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.100 [2024-07-24 20:50:19.657595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.100 [2024-07-24 20:50:19.657609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.100 [2024-07-24 20:50:19.657623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.100 [2024-07-24 20:50:19.657637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.100 [2024-07-24 20:50:19.657652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.100 [2024-07-24 20:50:19.657666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.100 [2024-07-24 20:50:19.657680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.100 [2024-07-24 20:50:19.657694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.100 [2024-07-24 20:50:19.657708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:24.100 [2024-07-24 20:50:19.658026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c75780 (9): Bad file descriptor 00:22:24.100 [2024-07-24 20:50:19.659046] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:24.100 [2024-07-24 20:50:19.659071] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.357 20:50:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.357 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:24.358 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.358 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.358 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.358 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:24.358 20:50:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:25.314 20:50:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:25.314 20:50:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.247 [2024-07-24 20:50:21.718401] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:26.247 [2024-07-24 20:50:21.718433] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:26.247 [2024-07-24 20:50:21.718454] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.247 [2024-07-24 20:50:21.804770] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:26.504 20:50:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.504 [2024-07-24 20:50:22.031248] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:26.504 [2024-07-24 20:50:22.031313] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:26.504 [2024-07-24 20:50:22.031346] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:26.504 [2024-07-24 20:50:22.031368] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:26.504 [2024-07-24 20:50:22.031380] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:26.504 [2024-07-24 20:50:22.036447] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c7c120 was disconnected and freed. delete nvme_qpair. 
00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1667891 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1667891 ']' 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1667891 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667891 
00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667891' 00:22:27.436 killing process with pid 1667891 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1667891 00:22:27.436 20:50:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1667891 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.694 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.694 rmmod nvme_tcp 00:22:27.694 rmmod nvme_fabrics 00:22:27.959 rmmod nvme_keyring 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1667732 ']' 00:22:27.959 
20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1667732 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1667732 ']' 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1667732 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667732 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:27.959 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667732' 00:22:27.960 killing process with pid 1667732 00:22:27.960 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1667732 00:22:27.960 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1667732 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.218 20:50:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.116 00:22:30.116 real 0m18.540s 00:22:30.116 user 0m26.846s 00:22:30.116 sys 0m3.087s 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.116 ************************************ 00:22:30.116 END TEST nvmf_discovery_remove_ifc 00:22:30.116 ************************************ 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.116 20:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.374 ************************************ 00:22:30.374 START TEST nvmf_identify_kernel_target 00:22:30.374 ************************************ 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:30.374 * Looking for test storage... 
00:22:30.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.374 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.375 20:50:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:32.272 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.272 20:50:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:32.272 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.272 20:50:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:32.272 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:32.272 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.272 
20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.272 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.273 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.273 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.273 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.530 
20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:32.530 00:22:32.530 --- 10.0.0.2 ping statistics --- 00:22:32.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.530 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:32.530 00:22:32.530 --- 10.0.0.1 ping statistics --- 00:22:32.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.530 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.530 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.531 20:50:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:32.531 20:50:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:33.465 Waiting for block devices as requested 00:22:33.465 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:33.723 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:33.723 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:33.981 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:33.981 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:33.981 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:33.981 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:33.981 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:34.239 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:34.239 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:34.239 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:34.497 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:34.497 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:34.497 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:34.497 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:34.755 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:34.755 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:34.755 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:34.755 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:34.755 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:35.013 No valid GPT data, bailing 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:35.013 00:22:35.013 Discovery Log Number of Records 2, Generation counter 2 00:22:35.013 =====Discovery Log Entry 0====== 00:22:35.013 trtype: tcp 00:22:35.013 adrfam: ipv4 00:22:35.013 subtype: current discovery subsystem 00:22:35.013 treq: not specified, sq flow control disable supported 00:22:35.013 portid: 1 00:22:35.013 trsvcid: 4420 00:22:35.013 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:35.013 traddr: 10.0.0.1 00:22:35.013 eflags: none 00:22:35.013 sectype: none 00:22:35.013 =====Discovery Log Entry 1====== 00:22:35.013 trtype: tcp 00:22:35.013 adrfam: ipv4 00:22:35.013 subtype: nvme subsystem 00:22:35.013 treq: not specified, sq flow control disable supported 00:22:35.013 portid: 1 
00:22:35.013 trsvcid: 4420 00:22:35.013 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:35.013 traddr: 10.0.0.1 00:22:35.013 eflags: none 00:22:35.013 sectype: none 00:22:35.013 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:35.013 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:35.014 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.014 ===================================================== 00:22:35.014 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:35.014 ===================================================== 00:22:35.014 Controller Capabilities/Features 00:22:35.014 ================================ 00:22:35.014 Vendor ID: 0000 00:22:35.014 Subsystem Vendor ID: 0000 00:22:35.014 Serial Number: 29029d9959f6b39f07a1 00:22:35.014 Model Number: Linux 00:22:35.014 Firmware Version: 6.7.0-68 00:22:35.014 Recommended Arb Burst: 0 00:22:35.014 IEEE OUI Identifier: 00 00 00 00:22:35.014 Multi-path I/O 00:22:35.014 May have multiple subsystem ports: No 00:22:35.014 May have multiple controllers: No 00:22:35.014 Associated with SR-IOV VF: No 00:22:35.014 Max Data Transfer Size: Unlimited 00:22:35.014 Max Number of Namespaces: 0 00:22:35.014 Max Number of I/O Queues: 1024 00:22:35.014 NVMe Specification Version (VS): 1.3 00:22:35.014 NVMe Specification Version (Identify): 1.3 00:22:35.014 Maximum Queue Entries: 1024 00:22:35.014 Contiguous Queues Required: No 00:22:35.014 Arbitration Mechanisms Supported 00:22:35.014 Weighted Round Robin: Not Supported 00:22:35.014 Vendor Specific: Not Supported 00:22:35.014 Reset Timeout: 7500 ms 00:22:35.014 Doorbell Stride: 4 bytes 00:22:35.014 NVM Subsystem Reset: Not Supported 00:22:35.014 Command Sets Supported 00:22:35.014 NVM Command Set: Supported 00:22:35.014 Boot Partition: Not Supported 
00:22:35.014 Memory Page Size Minimum: 4096 bytes 00:22:35.014 Memory Page Size Maximum: 4096 bytes 00:22:35.014 Persistent Memory Region: Not Supported 00:22:35.014 Optional Asynchronous Events Supported 00:22:35.014 Namespace Attribute Notices: Not Supported 00:22:35.014 Firmware Activation Notices: Not Supported 00:22:35.014 ANA Change Notices: Not Supported 00:22:35.014 PLE Aggregate Log Change Notices: Not Supported 00:22:35.014 LBA Status Info Alert Notices: Not Supported 00:22:35.014 EGE Aggregate Log Change Notices: Not Supported 00:22:35.014 Normal NVM Subsystem Shutdown event: Not Supported 00:22:35.014 Zone Descriptor Change Notices: Not Supported 00:22:35.014 Discovery Log Change Notices: Supported 00:22:35.014 Controller Attributes 00:22:35.014 128-bit Host Identifier: Not Supported 00:22:35.014 Non-Operational Permissive Mode: Not Supported 00:22:35.014 NVM Sets: Not Supported 00:22:35.014 Read Recovery Levels: Not Supported 00:22:35.014 Endurance Groups: Not Supported 00:22:35.014 Predictable Latency Mode: Not Supported 00:22:35.014 Traffic Based Keep ALive: Not Supported 00:22:35.014 Namespace Granularity: Not Supported 00:22:35.014 SQ Associations: Not Supported 00:22:35.014 UUID List: Not Supported 00:22:35.014 Multi-Domain Subsystem: Not Supported 00:22:35.014 Fixed Capacity Management: Not Supported 00:22:35.014 Variable Capacity Management: Not Supported 00:22:35.014 Delete Endurance Group: Not Supported 00:22:35.014 Delete NVM Set: Not Supported 00:22:35.014 Extended LBA Formats Supported: Not Supported 00:22:35.014 Flexible Data Placement Supported: Not Supported 00:22:35.014 00:22:35.014 Controller Memory Buffer Support 00:22:35.014 ================================ 00:22:35.014 Supported: No 00:22:35.014 00:22:35.014 Persistent Memory Region Support 00:22:35.014 ================================ 00:22:35.014 Supported: No 00:22:35.014 00:22:35.014 Admin Command Set Attributes 00:22:35.014 ============================ 00:22:35.014 Security 
Send/Receive: Not Supported 00:22:35.014 Format NVM: Not Supported 00:22:35.014 Firmware Activate/Download: Not Supported 00:22:35.014 Namespace Management: Not Supported 00:22:35.014 Device Self-Test: Not Supported 00:22:35.014 Directives: Not Supported 00:22:35.014 NVMe-MI: Not Supported 00:22:35.014 Virtualization Management: Not Supported 00:22:35.014 Doorbell Buffer Config: Not Supported 00:22:35.014 Get LBA Status Capability: Not Supported 00:22:35.014 Command & Feature Lockdown Capability: Not Supported 00:22:35.014 Abort Command Limit: 1 00:22:35.014 Async Event Request Limit: 1 00:22:35.014 Number of Firmware Slots: N/A 00:22:35.014 Firmware Slot 1 Read-Only: N/A 00:22:35.014 Firmware Activation Without Reset: N/A 00:22:35.014 Multiple Update Detection Support: N/A 00:22:35.014 Firmware Update Granularity: No Information Provided 00:22:35.014 Per-Namespace SMART Log: No 00:22:35.014 Asymmetric Namespace Access Log Page: Not Supported 00:22:35.014 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:35.014 Command Effects Log Page: Not Supported 00:22:35.014 Get Log Page Extended Data: Supported 00:22:35.014 Telemetry Log Pages: Not Supported 00:22:35.014 Persistent Event Log Pages: Not Supported 00:22:35.014 Supported Log Pages Log Page: May Support 00:22:35.014 Commands Supported & Effects Log Page: Not Supported 00:22:35.014 Feature Identifiers & Effects Log Page:May Support 00:22:35.014 NVMe-MI Commands & Effects Log Page: May Support 00:22:35.014 Data Area 4 for Telemetry Log: Not Supported 00:22:35.014 Error Log Page Entries Supported: 1 00:22:35.014 Keep Alive: Not Supported 00:22:35.014 00:22:35.014 NVM Command Set Attributes 00:22:35.014 ========================== 00:22:35.014 Submission Queue Entry Size 00:22:35.014 Max: 1 00:22:35.014 Min: 1 00:22:35.014 Completion Queue Entry Size 00:22:35.014 Max: 1 00:22:35.014 Min: 1 00:22:35.014 Number of Namespaces: 0 00:22:35.014 Compare Command: Not Supported 00:22:35.014 Write Uncorrectable Command: 
Not Supported 00:22:35.014 Dataset Management Command: Not Supported 00:22:35.014 Write Zeroes Command: Not Supported 00:22:35.014 Set Features Save Field: Not Supported 00:22:35.014 Reservations: Not Supported 00:22:35.014 Timestamp: Not Supported 00:22:35.014 Copy: Not Supported 00:22:35.014 Volatile Write Cache: Not Present 00:22:35.014 Atomic Write Unit (Normal): 1 00:22:35.014 Atomic Write Unit (PFail): 1 00:22:35.014 Atomic Compare & Write Unit: 1 00:22:35.014 Fused Compare & Write: Not Supported 00:22:35.014 Scatter-Gather List 00:22:35.014 SGL Command Set: Supported 00:22:35.014 SGL Keyed: Not Supported 00:22:35.014 SGL Bit Bucket Descriptor: Not Supported 00:22:35.014 SGL Metadata Pointer: Not Supported 00:22:35.014 Oversized SGL: Not Supported 00:22:35.014 SGL Metadata Address: Not Supported 00:22:35.014 SGL Offset: Supported 00:22:35.014 Transport SGL Data Block: Not Supported 00:22:35.014 Replay Protected Memory Block: Not Supported 00:22:35.014 00:22:35.014 Firmware Slot Information 00:22:35.014 ========================= 00:22:35.014 Active slot: 0 00:22:35.014 00:22:35.014 00:22:35.014 Error Log 00:22:35.014 ========= 00:22:35.014 00:22:35.014 Active Namespaces 00:22:35.014 ================= 00:22:35.014 Discovery Log Page 00:22:35.014 ================== 00:22:35.014 Generation Counter: 2 00:22:35.014 Number of Records: 2 00:22:35.014 Record Format: 0 00:22:35.014 00:22:35.014 Discovery Log Entry 0 00:22:35.014 ---------------------- 00:22:35.014 Transport Type: 3 (TCP) 00:22:35.014 Address Family: 1 (IPv4) 00:22:35.014 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:35.014 Entry Flags: 00:22:35.014 Duplicate Returned Information: 0 00:22:35.014 Explicit Persistent Connection Support for Discovery: 0 00:22:35.015 Transport Requirements: 00:22:35.015 Secure Channel: Not Specified 00:22:35.015 Port ID: 1 (0x0001) 00:22:35.015 Controller ID: 65535 (0xffff) 00:22:35.015 Admin Max SQ Size: 32 00:22:35.015 Transport Service Identifier: 4420 
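Each "Discovery Log Entry" block printed above is a flat list of `Field: value` lines. A minimal sketch (our own illustration, not part of the test suite) of turning one such block into a dict, using text copied from the output above:

```python
# Sample text copied from Discovery Log Entry 0 in the output above.
sample = """Transport Type: 3 (TCP)
Address Family: 1 (IPv4)
Subsystem Type: 3 (Current Discovery Subsystem)
Port ID: 1 (0x0001)
Controller ID: 65535 (0xffff)
Admin Max SQ Size: 32
Transport Service Identifier: 4420
NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
Transport Address: 10.0.0.1"""

def parse_entry(text):
    """Parse a 'Discovery Log Entry' block into {field: value}."""
    fields = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        # Split only on the first colon; the value may itself be empty
        # (e.g. the "Entry Flags:" header line in the full dump).
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

entry = parse_entry(sample)
```

Applied to Entry 1 instead, the same parser would yield `subnqn nqn.2016-06.io.spdk:testnqn` as the qualified name, matching the kernel target the test set up.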
00:22:35.015 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:35.015 Transport Address: 10.0.0.1 00:22:35.015 Discovery Log Entry 1 00:22:35.015 ---------------------- 00:22:35.015 Transport Type: 3 (TCP) 00:22:35.015 Address Family: 1 (IPv4) 00:22:35.015 Subsystem Type: 2 (NVM Subsystem) 00:22:35.015 Entry Flags: 00:22:35.015 Duplicate Returned Information: 0 00:22:35.015 Explicit Persistent Connection Support for Discovery: 0 00:22:35.015 Transport Requirements: 00:22:35.015 Secure Channel: Not Specified 00:22:35.015 Port ID: 1 (0x0001) 00:22:35.015 Controller ID: 65535 (0xffff) 00:22:35.015 Admin Max SQ Size: 32 00:22:35.015 Transport Service Identifier: 4420 00:22:35.015 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:35.015 Transport Address: 10.0.0.1 00:22:35.015 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:35.274 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.274 get_feature(0x01) failed 00:22:35.274 get_feature(0x02) failed 00:22:35.274 get_feature(0x04) failed 00:22:35.274 ===================================================== 00:22:35.274 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:35.274 ===================================================== 00:22:35.274 Controller Capabilities/Features 00:22:35.274 ================================ 00:22:35.274 Vendor ID: 0000 00:22:35.274 Subsystem Vendor ID: 0000 00:22:35.274 Serial Number: 55353145eb296ec1a572 00:22:35.274 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:35.274 Firmware Version: 6.7.0-68 00:22:35.274 Recommended Arb Burst: 6 00:22:35.274 IEEE OUI Identifier: 00 00 00 00:22:35.274 Multi-path I/O 00:22:35.274 May have multiple subsystem ports: Yes 00:22:35.274 May have multiple 
controllers: Yes 00:22:35.274 Associated with SR-IOV VF: No 00:22:35.274 Max Data Transfer Size: Unlimited 00:22:35.274 Max Number of Namespaces: 1024 00:22:35.274 Max Number of I/O Queues: 128 00:22:35.274 NVMe Specification Version (VS): 1.3 00:22:35.274 NVMe Specification Version (Identify): 1.3 00:22:35.274 Maximum Queue Entries: 1024 00:22:35.274 Contiguous Queues Required: No 00:22:35.274 Arbitration Mechanisms Supported 00:22:35.274 Weighted Round Robin: Not Supported 00:22:35.274 Vendor Specific: Not Supported 00:22:35.274 Reset Timeout: 7500 ms 00:22:35.274 Doorbell Stride: 4 bytes 00:22:35.274 NVM Subsystem Reset: Not Supported 00:22:35.274 Command Sets Supported 00:22:35.274 NVM Command Set: Supported 00:22:35.274 Boot Partition: Not Supported 00:22:35.274 Memory Page Size Minimum: 4096 bytes 00:22:35.274 Memory Page Size Maximum: 4096 bytes 00:22:35.274 Persistent Memory Region: Not Supported 00:22:35.274 Optional Asynchronous Events Supported 00:22:35.274 Namespace Attribute Notices: Supported 00:22:35.274 Firmware Activation Notices: Not Supported 00:22:35.274 ANA Change Notices: Supported 00:22:35.274 PLE Aggregate Log Change Notices: Not Supported 00:22:35.274 LBA Status Info Alert Notices: Not Supported 00:22:35.274 EGE Aggregate Log Change Notices: Not Supported 00:22:35.274 Normal NVM Subsystem Shutdown event: Not Supported 00:22:35.274 Zone Descriptor Change Notices: Not Supported 00:22:35.274 Discovery Log Change Notices: Not Supported 00:22:35.274 Controller Attributes 00:22:35.274 128-bit Host Identifier: Supported 00:22:35.274 Non-Operational Permissive Mode: Not Supported 00:22:35.274 NVM Sets: Not Supported 00:22:35.274 Read Recovery Levels: Not Supported 00:22:35.274 Endurance Groups: Not Supported 00:22:35.274 Predictable Latency Mode: Not Supported 00:22:35.274 Traffic Based Keep ALive: Supported 00:22:35.274 Namespace Granularity: Not Supported 00:22:35.274 SQ Associations: Not Supported 00:22:35.274 UUID List: Not Supported 
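The identify output above shows the kernel target advertising ANA support ("ANA Change Notices: Supported", and an ANA log page further down reporting "ANA State : 1"). For reference, the ANA state codes defined by the NVMe specification are summarized below; this mapping is our own annotation, not SPDK output:

```python
# ANA state codes from the NVMe base specification's Asymmetric Namespace
# Access log page descriptor; "ANA State : 1" in the dump above therefore
# means the namespace path is ANA Optimized.
ANA_STATES = {
    0x1: "ANA Optimized",
    0x2: "ANA Non-Optimized",
    0x3: "ANA Inaccessible",
    0x4: "ANA Persistent Loss",
    0xF: "ANA Change",
}

reported_state = ANA_STATES[0x1]
```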
00:22:35.274 Multi-Domain Subsystem: Not Supported 00:22:35.274 Fixed Capacity Management: Not Supported 00:22:35.274 Variable Capacity Management: Not Supported 00:22:35.274 Delete Endurance Group: Not Supported 00:22:35.274 Delete NVM Set: Not Supported 00:22:35.274 Extended LBA Formats Supported: Not Supported 00:22:35.274 Flexible Data Placement Supported: Not Supported 00:22:35.274 00:22:35.274 Controller Memory Buffer Support 00:22:35.274 ================================ 00:22:35.274 Supported: No 00:22:35.274 00:22:35.274 Persistent Memory Region Support 00:22:35.274 ================================ 00:22:35.274 Supported: No 00:22:35.274 00:22:35.274 Admin Command Set Attributes 00:22:35.274 ============================ 00:22:35.274 Security Send/Receive: Not Supported 00:22:35.274 Format NVM: Not Supported 00:22:35.274 Firmware Activate/Download: Not Supported 00:22:35.274 Namespace Management: Not Supported 00:22:35.274 Device Self-Test: Not Supported 00:22:35.274 Directives: Not Supported 00:22:35.274 NVMe-MI: Not Supported 00:22:35.274 Virtualization Management: Not Supported 00:22:35.274 Doorbell Buffer Config: Not Supported 00:22:35.274 Get LBA Status Capability: Not Supported 00:22:35.274 Command & Feature Lockdown Capability: Not Supported 00:22:35.274 Abort Command Limit: 4 00:22:35.274 Async Event Request Limit: 4 00:22:35.274 Number of Firmware Slots: N/A 00:22:35.274 Firmware Slot 1 Read-Only: N/A 00:22:35.274 Firmware Activation Without Reset: N/A 00:22:35.274 Multiple Update Detection Support: N/A 00:22:35.274 Firmware Update Granularity: No Information Provided 00:22:35.274 Per-Namespace SMART Log: Yes 00:22:35.274 Asymmetric Namespace Access Log Page: Supported 00:22:35.274 ANA Transition Time : 10 sec 00:22:35.274 00:22:35.274 Asymmetric Namespace Access Capabilities 00:22:35.274 ANA Optimized State : Supported 00:22:35.274 ANA Non-Optimized State : Supported 00:22:35.274 ANA Inaccessible State : Supported 00:22:35.274 ANA Persistent Loss 
State : Supported 00:22:35.274 ANA Change State : Supported 00:22:35.274 ANAGRPID is not changed : No 00:22:35.274 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:35.274 00:22:35.274 ANA Group Identifier Maximum : 128 00:22:35.274 Number of ANA Group Identifiers : 128 00:22:35.274 Max Number of Allowed Namespaces : 1024 00:22:35.274 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:35.274 Command Effects Log Page: Supported 00:22:35.274 Get Log Page Extended Data: Supported 00:22:35.274 Telemetry Log Pages: Not Supported 00:22:35.274 Persistent Event Log Pages: Not Supported 00:22:35.274 Supported Log Pages Log Page: May Support 00:22:35.274 Commands Supported & Effects Log Page: Not Supported 00:22:35.274 Feature Identifiers & Effects Log Page:May Support 00:22:35.274 NVMe-MI Commands & Effects Log Page: May Support 00:22:35.274 Data Area 4 for Telemetry Log: Not Supported 00:22:35.274 Error Log Page Entries Supported: 128 00:22:35.274 Keep Alive: Supported 00:22:35.274 Keep Alive Granularity: 1000 ms 00:22:35.274 00:22:35.274 NVM Command Set Attributes 00:22:35.274 ========================== 00:22:35.274 Submission Queue Entry Size 00:22:35.274 Max: 64 00:22:35.274 Min: 64 00:22:35.274 Completion Queue Entry Size 00:22:35.274 Max: 16 00:22:35.274 Min: 16 00:22:35.274 Number of Namespaces: 1024 00:22:35.274 Compare Command: Not Supported 00:22:35.274 Write Uncorrectable Command: Not Supported 00:22:35.274 Dataset Management Command: Supported 00:22:35.274 Write Zeroes Command: Supported 00:22:35.274 Set Features Save Field: Not Supported 00:22:35.274 Reservations: Not Supported 00:22:35.274 Timestamp: Not Supported 00:22:35.274 Copy: Not Supported 00:22:35.274 Volatile Write Cache: Present 00:22:35.274 Atomic Write Unit (Normal): 1 00:22:35.274 Atomic Write Unit (PFail): 1 00:22:35.274 Atomic Compare & Write Unit: 1 00:22:35.274 Fused Compare & Write: Not Supported 00:22:35.274 Scatter-Gather List 00:22:35.274 SGL Command Set: Supported 00:22:35.274 SGL 
Keyed: Not Supported 00:22:35.274 SGL Bit Bucket Descriptor: Not Supported 00:22:35.274 SGL Metadata Pointer: Not Supported 00:22:35.274 Oversized SGL: Not Supported 00:22:35.274 SGL Metadata Address: Not Supported 00:22:35.274 SGL Offset: Supported 00:22:35.274 Transport SGL Data Block: Not Supported 00:22:35.274 Replay Protected Memory Block: Not Supported 00:22:35.274 00:22:35.274 Firmware Slot Information 00:22:35.274 ========================= 00:22:35.274 Active slot: 0 00:22:35.274 00:22:35.274 Asymmetric Namespace Access 00:22:35.274 =========================== 00:22:35.274 Change Count : 0 00:22:35.274 Number of ANA Group Descriptors : 1 00:22:35.274 ANA Group Descriptor : 0 00:22:35.274 ANA Group ID : 1 00:22:35.274 Number of NSID Values : 1 00:22:35.274 Change Count : 0 00:22:35.274 ANA State : 1 00:22:35.274 Namespace Identifier : 1 00:22:35.274 00:22:35.274 Commands Supported and Effects 00:22:35.274 ============================== 00:22:35.274 Admin Commands 00:22:35.274 -------------- 00:22:35.274 Get Log Page (02h): Supported 00:22:35.274 Identify (06h): Supported 00:22:35.274 Abort (08h): Supported 00:22:35.274 Set Features (09h): Supported 00:22:35.274 Get Features (0Ah): Supported 00:22:35.274 Asynchronous Event Request (0Ch): Supported 00:22:35.274 Keep Alive (18h): Supported 00:22:35.274 I/O Commands 00:22:35.274 ------------ 00:22:35.274 Flush (00h): Supported 00:22:35.275 Write (01h): Supported LBA-Change 00:22:35.275 Read (02h): Supported 00:22:35.275 Write Zeroes (08h): Supported LBA-Change 00:22:35.275 Dataset Management (09h): Supported 00:22:35.275 00:22:35.275 Error Log 00:22:35.275 ========= 00:22:35.275 Entry: 0 00:22:35.275 Error Count: 0x3 00:22:35.275 Submission Queue Id: 0x0 00:22:35.275 Command Id: 0x5 00:22:35.275 Phase Bit: 0 00:22:35.275 Status Code: 0x2 00:22:35.275 Status Code Type: 0x0 00:22:35.275 Do Not Retry: 1 00:22:35.275 Error Location: 0x28 00:22:35.275 LBA: 0x0 00:22:35.275 Namespace: 0x0 00:22:35.275 Vendor Log Page: 
0x0 00:22:35.275 ----------- 00:22:35.275 Entry: 1 00:22:35.275 Error Count: 0x2 00:22:35.275 Submission Queue Id: 0x0 00:22:35.275 Command Id: 0x5 00:22:35.275 Phase Bit: 0 00:22:35.275 Status Code: 0x2 00:22:35.275 Status Code Type: 0x0 00:22:35.275 Do Not Retry: 1 00:22:35.275 Error Location: 0x28 00:22:35.275 LBA: 0x0 00:22:35.275 Namespace: 0x0 00:22:35.275 Vendor Log Page: 0x0 00:22:35.275 ----------- 00:22:35.275 Entry: 2 00:22:35.275 Error Count: 0x1 00:22:35.275 Submission Queue Id: 0x0 00:22:35.275 Command Id: 0x4 00:22:35.275 Phase Bit: 0 00:22:35.275 Status Code: 0x2 00:22:35.275 Status Code Type: 0x0 00:22:35.275 Do Not Retry: 1 00:22:35.275 Error Location: 0x28 00:22:35.275 LBA: 0x0 00:22:35.275 Namespace: 0x0 00:22:35.275 Vendor Log Page: 0x0 00:22:35.275 00:22:35.275 Number of Queues 00:22:35.275 ================ 00:22:35.275 Number of I/O Submission Queues: 128 00:22:35.275 Number of I/O Completion Queues: 128 00:22:35.275 00:22:35.275 ZNS Specific Controller Data 00:22:35.275 ============================ 00:22:35.275 Zone Append Size Limit: 0 00:22:35.275 00:22:35.275 00:22:35.275 Active Namespaces 00:22:35.275 ================= 00:22:35.275 get_feature(0x05) failed 00:22:35.275 Namespace ID:1 00:22:35.275 Command Set Identifier: NVM (00h) 00:22:35.275 Deallocate: Supported 00:22:35.275 Deallocated/Unwritten Error: Not Supported 00:22:35.275 Deallocated Read Value: Unknown 00:22:35.275 Deallocate in Write Zeroes: Not Supported 00:22:35.275 Deallocated Guard Field: 0xFFFF 00:22:35.275 Flush: Supported 00:22:35.275 Reservation: Not Supported 00:22:35.275 Namespace Sharing Capabilities: Multiple Controllers 00:22:35.275 Size (in LBAs): 1953525168 (931GiB) 00:22:35.275 Capacity (in LBAs): 1953525168 (931GiB) 00:22:35.275 Utilization (in LBAs): 1953525168 (931GiB) 00:22:35.275 UUID: 83c082c9-f7b8-4737-8953-2e478e9e9823 00:22:35.275 Thin Provisioning: Not Supported 00:22:35.275 Per-NS Atomic Units: Yes 00:22:35.275 Atomic Boundary Size (Normal): 0 
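The namespace dump above reports "Size (in LBAs): 1953525168 (931GiB)" with a 512-byte data size for LBA Format #00. A quick sanity check that those figures agree:

```python
# Verify the reported namespace capacity: LBA count times the 512-byte
# LBA format shown in the dump should give the 931 GiB the tool prints.
lbas = 1953525168
lba_size = 512  # bytes, per "LBA Format #00: Data Size: 512"

total_bytes = lbas * lba_size
gib = total_bytes / 2**30  # binary gibibytes, matching the "GiB" unit
```

`total_bytes` comes out just over 1 TB decimal, i.e. roughly 931.5 GiB, consistent with the rounded "931GiB" in the output.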
00:22:35.275 Atomic Boundary Size (PFail): 0 00:22:35.275 Atomic Boundary Offset: 0 00:22:35.275 NGUID/EUI64 Never Reused: No 00:22:35.275 ANA group ID: 1 00:22:35.275 Namespace Write Protected: No 00:22:35.275 Number of LBA Formats: 1 00:22:35.275 Current LBA Format: LBA Format #00 00:22:35.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:35.275 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.275 rmmod nvme_tcp 00:22:35.275 rmmod nvme_fabrics 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.275 
20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.275 20:50:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.173 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.173 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:37.173 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:37.173 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:37.431 20:50:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:37.431 20:50:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:38.804 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:38.804 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:38.804 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:39.738 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:39.738 00:22:39.738 real 0m9.464s 00:22:39.738 user 0m1.949s 00:22:39.738 sys 0m3.443s 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 ************************************ 00:22:39.738 END TEST nvmf_identify_kernel_target 00:22:39.738 ************************************ 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 ************************************ 00:22:39.738 START TEST nvmf_auth_host 00:22:39.738 ************************************ 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:39.738 * Looking for test storage... 00:22:39.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
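`nvmf/common.sh@17` above runs `nvme gen-hostnqn` to produce the UUID-based host NQN assigned to `NVME_HOSTNQN` on the next line. A rough Python illustration of that format (nvme-cli itself may derive the UUID differently, e.g. from platform identifiers, so treat this only as a sketch of the resulting string shape):

```python
import uuid

# Sketch of the host NQN format produced by `nvme gen-hostnqn`, as seen in
# the NVME_HOSTNQN value in this log: the standard UUID-based NQN prefix
# followed by a UUID. The use of uuid4 here is our assumption for
# illustration, not necessarily how nvme-cli picks the UUID.
def gen_hostnqn():
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

hostnqn = gen_hostnqn()
```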
00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.738 20:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.738 20:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:42.267 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:42.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:42.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:42.268 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:42.268 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.268 20:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:22:42.268 00:22:42.268 --- 10.0.0.2 ping statistics --- 00:22:42.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.268 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:22:42.268 00:22:42.268 --- 10.0.0.1 ping statistics --- 00:22:42.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.268 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1675089 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1675089 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1675089 ']' 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.268 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e734f84d5cade6247fa33eb9b9a3f69 00:22:42.269 20:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LWl 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e734f84d5cade6247fa33eb9b9a3f69 0 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e734f84d5cade6247fa33eb9b9a3f69 0 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e734f84d5cade6247fa33eb9b9a3f69 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:42.269 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LWl 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LWl 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LWl 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:42.527 20:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90bbe125ecd2e6dce8d5c67f7c65a5813e8244af7aa277101f574a1d1f4407cd 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LhW 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90bbe125ecd2e6dce8d5c67f7c65a5813e8244af7aa277101f574a1d1f4407cd 3 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90bbe125ecd2e6dce8d5c67f7c65a5813e8244af7aa277101f574a1d1f4407cd 3 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90bbe125ecd2e6dce8d5c67f7c65a5813e8244af7aa277101f574a1d1f4407cd 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LhW 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LhW 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LhW 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ecb2e6c2cf9cf79c668e03eb3a8df7e36e1c1024282ebdc3 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:42.527 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NWD 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecb2e6c2cf9cf79c668e03eb3a8df7e36e1c1024282ebdc3 0 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecb2e6c2cf9cf79c668e03eb3a8df7e36e1c1024282ebdc3 0 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecb2e6c2cf9cf79c668e03eb3a8df7e36e1c1024282ebdc3 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NWD 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NWD 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NWD 
00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e84fa7dfb34d2c9db3eb9a5740e4c91359a5cf4f1328bc33 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.q58 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e84fa7dfb34d2c9db3eb9a5740e4c91359a5cf4f1328bc33 2 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e84fa7dfb34d2c9db3eb9a5740e4c91359a5cf4f1328bc33 2 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e84fa7dfb34d2c9db3eb9a5740e4c91359a5cf4f1328bc33 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.528 20:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.q58 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.q58 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.q58 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bef909626fe1d9b1c0514a3123bfe62b 00:22:42.528 20:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PHt 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bef909626fe1d9b1c0514a3123bfe62b 1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bef909626fe1d9b1c0514a3123bfe62b 1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=bef909626fe1d9b1c0514a3123bfe62b 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PHt 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PHt 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PHt 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e38b3ac9344ea05c492dacb36859f98e 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iCI 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e38b3ac9344ea05c492dacb36859f98e 1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e38b3ac9344ea05c492dacb36859f98e 1 00:22:42.528 20:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e38b3ac9344ea05c492dacb36859f98e 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.528 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iCI 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iCI 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iCI 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd0d31d049a28184d2568c80ceb6b334a32d6d3559e89f32 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XtC 00:22:42.787 20:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd0d31d049a28184d2568c80ceb6b334a32d6d3559e89f32 2 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd0d31d049a28184d2568c80ceb6b334a32d6d3559e89f32 2 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd0d31d049a28184d2568c80ceb6b334a32d6d3559e89f32 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XtC 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XtC 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XtC 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=1b785912475e03af322e6e14179afdfd 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MZF 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b785912475e03af322e6e14179afdfd 0 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b785912475e03af322e6e14179afdfd 0 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b785912475e03af322e6e14179afdfd 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MZF 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MZF 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.MZF 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:42.787 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=213c279be0491807aca68da9b511977022f084a842181dde33f23e2e10f85249 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cjG 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 213c279be0491807aca68da9b511977022f084a842181dde33f23e2e10f85249 3 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 213c279be0491807aca68da9b511977022f084a842181dde33f23e2e10f85249 3 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=213c279be0491807aca68da9b511977022f084a842181dde33f23e2e10f85249 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cjG 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cjG 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cjG 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1675089 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 1675089 ']' 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.788 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LWl 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LhW ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LhW 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NWD 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.q58 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q58 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PHt 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iCI ]] 00:22:43.047 20:50:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iCI 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.XtC 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.MZF ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.MZF 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cjG 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:43.047 20:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:44.422 Waiting for block devices as requested 00:22:44.422 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:44.422 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:44.680 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:44.680 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:44.680 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:44.937 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:44.937 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:44.937 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:45.195 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:45.195 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:45.195 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:45.195 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:45.453 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:45.453 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:45.453 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:45.453 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:45.710 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:45.968 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:45.969 No valid GPT data, bailing 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:45.969 20:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:45.969 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:46.227 00:22:46.227 Discovery Log Number of Records 2, Generation counter 2 00:22:46.227 =====Discovery Log Entry 0====== 00:22:46.227 trtype: tcp 00:22:46.227 adrfam: ipv4 00:22:46.227 subtype: current discovery subsystem 00:22:46.227 treq: not specified, sq flow control disable supported 00:22:46.227 portid: 1 00:22:46.227 trsvcid: 4420 00:22:46.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:46.227 traddr: 10.0.0.1 00:22:46.227 eflags: none 00:22:46.227 sectype: none 00:22:46.227 =====Discovery Log Entry 1====== 00:22:46.227 trtype: tcp 00:22:46.227 adrfam: ipv4 00:22:46.227 subtype: nvme subsystem 00:22:46.227 treq: not specified, sq flow control 
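The `configure_kernel_target` sequence traced above (`nvmf/common.sh@658`-`@677`) builds a kernel NVMe/TCP target entirely through the nvmet configfs tree: create the subsystem, namespace, and port directories, echo the attributes, then symlink the subsystem under the port. A hedged sketch of those steps, with the caveat that the trace shows only the values being echoed, so the attribute filenames here are the standard kernel nvmet configfs names inferred from context (requires root and the `nvmet`/`nvmet-tcp` modules):

```python
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")

def configure_kernel_target(nqn: str, traddr: str, device: str = "/dev/nvme0n1") -> None:
    """Sketch of the mkdir/echo/ln -s sequence traced above."""
    subsys = NVMET / "subsystems" / nqn
    ns = subsys / "namespaces" / "1"
    port = NVMET / "ports" / "1"
    for d in (subsys, ns, port):
        d.mkdir(parents=True, exist_ok=True)
    (subsys / "attr_model").write_text(f"SPDK-{nqn}\n")       # echo SPDK-nqn...
    (subsys / "attr_allow_any_host").write_text("1\n")        # echo 1
    (ns / "device_path").write_text(device + "\n")            # echo /dev/nvme0n1
    (ns / "enable").write_text("1\n")                         # echo 1
    (port / "addr_traddr").write_text(traddr + "\n")          # echo 10.0.0.1
    (port / "addr_trtype").write_text("tcp\n")                # echo tcp
    (port / "addr_trsvcid").write_text("4420\n")              # echo 4420
    (port / "addr_adrfam").write_text("ipv4\n")               # echo ipv4
    (port / "subsystems" / nqn).symlink_to(subsys)            # ln -s ... ports/1/subsystems/
```

This is a configuration sketch, not something runnable outside a root shell on a host with configfs mounted; the later `host/auth.sh@36`-`@38` steps restrict access again by creating a host NQN under `hosts/` and linking it into the subsystem's `allowed_hosts/`.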
disable supported 00:22:46.227 portid: 1 00:22:46.227 trsvcid: 4420 00:22:46.227 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:46.227 traddr: 10.0.0.1 00:22:46.227 eflags: none 00:22:46.227 sectype: none 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
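The `nvme discover` listing above reports two records: the well-known discovery subsystem (`nqn.2014-08.org.nvmexpress.discovery`) and the just-configured target (`nqn.2024-02.io.spdk:cnode0`), both on `10.0.0.1:4420` over TCP. A small illustrative parser for that text format (hypothetical helper, not part of the test suite), collecting the `field: value` pairs under each `=====Discovery Log Entry N=====` header:

```python
def parse_discovery_log(text: str) -> list[dict]:
    """Group the `field: value` lines of `nvme discover` output by entry."""
    records, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("====="):       # '=====Discovery Log Entry N======'
            current = {}
            records.append(current)
        elif current is not None and ": " in line:
            field, _, value = line.partition(": ")  # split on first ': ' only,
            current[field] = value                  # so 'subnqn: nqn...:cnode0' survives
    return records
```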
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.227 20:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.227 nvme0n1 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.227 20:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.227 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.485 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.486 20:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 nvme0n1 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.486 20:50:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 20:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.486 
20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.486 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.744 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 nvme0n1 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 
00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.745 20:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.745 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.003 nvme0n1 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.003 20:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.003 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.262 nvme0n1 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.262 20:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.262 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.520 nvme0n1 00:22:47.520 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.520 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.520 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.520 
20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.521 20:50:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.521 20:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.779 nvme0n1 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.779 20:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:47.779 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:47.780 20:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.780 20:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.780 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.037 nvme0n1 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.037 20:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:48.037 20:50:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.037 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.295 nvme0n1
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.295 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.553 nvme0n1
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.553 20:50:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.553 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.812 nvme0n1
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b:
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=:
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b:
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=:
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.812 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.071 nvme0n1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==:
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==:
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==:
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==:
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.071 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.637 nvme0n1
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:49.637 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]]
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.638 20:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.922 nvme0n1
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.922 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.203 nvme0n1
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:50.203 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.204 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.462 nvme0n1
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:50.462 20:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.462 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.722 20:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.722 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.289 nvme0n1 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.289 20:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.289 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.290 20:50:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.290 20:50:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.547 nvme0n1 00:22:51.547 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.547 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.547 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.547 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.547 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 20:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.806 20:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.373 nvme0n1 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.373 20:50:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.373 20:50:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:22:52.940 nvme0n1 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.940 
20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.506 nvme0n1 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.506 20:50:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.506 20:50:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.506 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.437 nvme0n1 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.437 20:50:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.695 20:50:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.695 20:50:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.695 20:50:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.627 nvme0n1 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.627 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.628 20:50:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.628 20:50:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.628 20:50:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.561 nvme0n1 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.561 20:50:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.561 20:50:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:22:57.939 nvme0n1 00:22:57.939 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.939 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.940 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:57.941 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.942 
20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.942 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.943 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.943 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.943 20:50:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.880 nvme0n1 00:22:58.880 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.880 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.880 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.880 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.880 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.881 20:50:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 nvme0n1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:22:58.881 20:50:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.881 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.139 nvme0n1 00:22:59.139 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
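The `nvmf/common.sh@741-755` trace above shows how `get_main_ns_ip` resolves the address used for `bdev_nvme_attach_controller`: an associative array maps each transport name to the *name* of an environment variable, the entry for the active transport is selected, and bash indirect expansion dereferences it to the actual address (10.0.0.1 here). A minimal runnable sketch of that selection logic, with the variable values hard-coded as assumptions matching this log (the real script sources them from its test environment):

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip selection logic traced at nvmf/common.sh@741-755.
# NVMF_INITIATOR_IP=10.0.0.1 and TEST_TRANSPORT=tcp match this log;
# NVMF_FIRST_TARGET_IP is a hypothetical value for the rdma path.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if no transport is set or it has no mapped variable name,
    # mirroring the [[ -z ... ]] guards visible in the trace.
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # ${!name} indirection: the array holds a variable *name*,
    # so dereference it to get the address itself.
    ip=${!ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip   # prints 10.0.0.1 with the values assumed above
```

This explains why the trace first logs `ip=NVMF_INITIATOR_IP` (the name) and only afterwards tests and echoes `10.0.0.1` (the dereferenced value).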
00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.140 
20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.140 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.397 nvme0n1 00:22:59.397 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.397 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.397 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.398 20:50:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.398 20:50:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
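The `host/auth.sh@58` line repeated through this trace, `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`, is what makes bidirectional authentication optional per key: the `${var:+word}` expansion emits the flag pair only when a controller key exists for that keyid. That is why the keyid=4 connect below runs `bdev_nvme_attach_controller ... --dhchap-key key4` with no `--dhchap-ctrlr-key` argument at all (its `ckey` is empty). A runnable sketch of just that argument construction, using placeholder key names rather than the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Sketch of the conditional controller-key argument from host/auth.sh@58.
# ckeys values here are placeholders; index 4 is intentionally empty,
# matching the keyid=4 case in this log where ckey expands to nothing.
ckeys=("c0" "c1" "c2" "c3" "")

for keyid in "${!ckeys[@]}"; do
    # ${ckeys[keyid]:+...}: expands to the two words
    # "--dhchap-ctrlr-key ckeyN" only when ckeys[keyid] is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=${keyid}: --dhchap-key key${keyid} ${ckey[*]}"
done
```

For keyids 0-3 the loop prints both flag pairs; for keyid 4 only `--dhchap-key key4` survives, so the attach is authenticated in one direction only.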
00:22:59.656 nvme0n1 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.656 
20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.656 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.915 nvme0n1 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.915 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.916 20:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.916 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.175 nvme0n1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.175 20:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.175 20:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.175 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.433 nvme0n1 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.433 20:50:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.433 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]]
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.434 20:50:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.692 nvme0n1
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.692 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.950 nvme0n1
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.950 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.951 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.210 nvme0n1
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b:
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=:
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b:
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=:
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.210 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.470 nvme0n1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==:
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==:
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==:
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==:
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.470 20:50:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.728 nvme0n1
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.729 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC:
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]]
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5:
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:01.987 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.988 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.247 nvme0n1
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==:
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3:
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.247 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.506 nvme0n1
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.506 20:50:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=:
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:02.506
20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.506 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.764 nvme0n1 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.764 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.048 20:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.048 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.049 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.049 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.049 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.049 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.049 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.327 nvme0n1 00:23:03.327 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.586 20:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.586 20:50:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.586 20:50:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.152 nvme0n1 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.152 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.153 20:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.153 20:50:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.153 20:50:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.720 nvme0n1 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.720 20:51:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.720 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:05.286 nvme0n1 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.286 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.287 
20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.287 20:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.853 nvme0n1 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.853 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.854 20:51:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.854 20:51:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.788 nvme0n1 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.788 20:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.788 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.789 20:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.789 20:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.720 nvme0n1 00:23:07.720 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.720 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.979 20:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.979 20:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.979 20:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.912 nvme0n1 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.912 20:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:08.912 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.913 20:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:09.844 nvme0n1 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.844 
20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.844 20:51:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.777 nvme0n1 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.777 20:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:10.777 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.778 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.036 nvme0n1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:11.036 20:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.036 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.294 nvme0n1 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.294 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.295 
20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.295 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.554 nvme0n1 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.554 20:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.554 20:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:11.812 nvme0n1 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.812 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.813 
20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 nvme0n1 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.071 20:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.071 nvme0n1 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.071 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.072 20:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.072 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.330 20:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.330 nvme0n1 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.330 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.588 20:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.589 20:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.589 20:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.589 nvme0n1 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.589 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.847 20:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:12.847 nvme0n1 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.847 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.106 
20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.106 nvme0n1 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.106 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.365 20:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.365 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.624 nvme0n1 00:23:13.624 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.624 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.624 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.624 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.624 20:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.624 20:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.624 20:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.624 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.883 nvme0n1 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.883 20:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.883 20:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.883 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.448 nvme0n1 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.448 20:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.448 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.449 20:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
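The `host/auth.sh@58` lines above replay the same bash idiom each iteration: `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` builds the controller-key flag pair only when a controller key exists for that keyid, which is why the later key4 attach runs without any `--dhchap-ctrlr-key` argument. A minimal standalone sketch of that expansion (the key strings here are placeholders, not the real test keys):

```shell
#!/usr/bin/env bash
# Sketch of the ckey expansion at host/auth.sh@58 in this trace.
# Placeholder controller keys; keyid 4 intentionally has none, as in the log.
ckeys=("DHHC-1:03:placeholder0:" "DHHC-1:02:placeholder1:" \
       "DHHC-1:01:placeholder2:" "DHHC-1:00:placeholder3:" "")

for keyid in "${!ckeys[@]}"; do
    # ${var:+word} expands to word only when var is set and non-empty,
    # so an empty ckeys[keyid] yields an empty array: no flag is passed.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra_args=${#ckey[@]}"
done
```

Expanding "${ckey[@]}" into the `rpc_cmd bdev_nvme_attach_controller` invocation then adds either exactly two arguments or none, matching the key3 (with `ckey3`) and key4 (no controller key) attaches in the log.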
00:23:14.707 nvme0n1 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.707 
20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.707 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.965 nvme0n1 00:23:14.965 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.965 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.965 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.966 20:51:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.966 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.532 nvme0n1 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.532 20:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.532 20:51:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.532 20:51:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.532 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.098 nvme0n1 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.098 20:51:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.098 20:51:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.098 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.355 20:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.921 nvme0n1 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.921 20:51:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.921 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
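Every `connect_authenticate` iteration above replays `get_main_ns_ip` (the `nvmf/common.sh@741-755` lines): the helper maps the transport to the *name* of an IP variable, dereferences it, and echoes the result, which is how each attach ends up targeting 10.0.0.1 for tcp. A reconstruction of those steps from the xtrace, with illustrative addresses (the actual helper in nvmf/common.sh may carry extra logic not visible in this trace):

```shell
#!/usr/bin/env bash
# Reconstruction of the get_main_ns_ip steps shown in this xtrace.
# The addresses and TEST_TRANSPORT value are illustrative assumptions.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    # Pick the variable name for this transport, then dereference it (${!ip}).
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip
```

The indirection (`ip=NVMF_INITIATOR_IP`, then `${!ip}` yielding 10.0.0.1) is exactly the pair of steps visible between `# ip=NVMF_INITIATOR_IP` and `# echo 10.0.0.1` in each iteration of the log.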
00:23:17.487 nvme0n1 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.487 
20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.487 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.488 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.488 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.488 20:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.052 nvme0n1 00:23:18.052 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.052 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWU3MzRmODRkNWNhZGU2MjQ3ZmEzM2ViOWI5YTNmNjlrhG0b: 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBiYmUxMjVlY2QyZTZkY2U4ZDVjNjdmN2M2NWE1ODEzZTgyNDRhZjdhYTI3NzEwMWY1NzRhMWQxZjQ0MDdjZIW/ZDk=: 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.053 20:51:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.053 20:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.986 nvme0n1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.986 20:51:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.986 20:51:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.986 20:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.920 nvme0n1 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.920 20:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmVmOTA5NjI2ZmUxZDliMWMwNTE0YTMxMjNiZmU2MmI+T3WC: 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: ]] 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTM4YjNhYzkzNDRlYTA1YzQ5MmRhY2IzNjg1OWY5OGUh8EV5: 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.920 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.920 20:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.921 20:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.894 nvme0n1 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.894 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.152 20:51:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQwZDMxZDA0OWEyODE4NGQyNTY4YzgwY2ViNmIzMzRhMzJkNmQzNTU5ZTg5ZjMymLcLIw==: 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI3ODU5MTI0NzVlMDNhZjMyMmU2ZTE0MTc5YWZkZmTWPks3: 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.152 20:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:22.084 nvme0n1 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjEzYzI3OWJlMDQ5MTgwN2FjYTY4ZGE5YjUxMTk3NzAyMmYwODRhODQyMTgxZGRlMzNmMjNlMmUxMGY4NTI0OYxgplo=: 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.084 
20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.084 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.085 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.085 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.085 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.085 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.085 20:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.018 nvme0n1 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNiMmU2YzJjZjljZjc5YzY2OGUwM2ViM2E4ZGY3ZTM2ZTFjMTAyNDI4MmViZGMzRJW+yA==: 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: ]] 00:23:23.018 
20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTg0ZmE3ZGZiMzRkMmM5ZGIzZWI5YTU3NDBlNGM5MTM1OWE1Y2Y0ZjEzMjhiYzMzuJ/3xA==: 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.018 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.277 request: 00:23:23.277 { 00:23:23.277 "name": "nvme0", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.1", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:23.277 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:23.277 "prchk_reftag": false, 00:23:23.277 "prchk_guard": false, 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false, 00:23:23.277 "method": "bdev_nvme_attach_controller", 00:23:23.277 "req_id": 1 00:23:23.277 } 00:23:23.277 Got JSON-RPC error response 00:23:23.277 response: 00:23:23.277 { 00:23:23.277 "code": -5, 00:23:23.277 "message": "Input/output error" 00:23:23.277 } 00:23:23.277 20:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.277 20:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.277 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.277 request: 00:23:23.277 { 00:23:23.277 "name": "nvme0", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.1", 00:23:23.277 "adrfam": "ipv4", 00:23:23.278 
"trsvcid": "4420", 00:23:23.278 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:23.278 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:23.278 "prchk_reftag": false, 00:23:23.278 "prchk_guard": false, 00:23:23.278 "hdgst": false, 00:23:23.278 "ddgst": false, 00:23:23.278 "dhchap_key": "key2", 00:23:23.278 "method": "bdev_nvme_attach_controller", 00:23:23.278 "req_id": 1 00:23:23.278 } 00:23:23.278 Got JSON-RPC error response 00:23:23.278 response: 00:23:23.278 { 00:23:23.278 "code": -5, 00:23:23.278 "message": "Input/output error" 00:23:23.278 } 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.278 
20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.278 20:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.278 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.536 request: 00:23:23.536 { 00:23:23.536 "name": "nvme0", 00:23:23.536 "trtype": "tcp", 00:23:23.536 "traddr": "10.0.0.1", 00:23:23.536 "adrfam": "ipv4", 00:23:23.536 "trsvcid": "4420", 00:23:23.536 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:23.536 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:23.536 "prchk_reftag": false, 00:23:23.536 "prchk_guard": false, 00:23:23.536 "hdgst": false, 00:23:23.536 "ddgst": false, 00:23:23.536 "dhchap_key": "key1", 00:23:23.536 "dhchap_ctrlr_key": "ckey2", 00:23:23.536 "method": "bdev_nvme_attach_controller", 00:23:23.536 "req_id": 1 00:23:23.536 } 00:23:23.536 Got JSON-RPC error response 00:23:23.536 response: 00:23:23.536 { 00:23:23.536 "code": -5, 00:23:23.536 "message": "Input/output error" 00:23:23.536 } 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:23.536 20:51:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.536 rmmod nvme_tcp 00:23:23.536 rmmod nvme_fabrics 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1675089 ']' 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1675089 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1675089 ']' 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1675089 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675089 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675089' 00:23:23.536 killing process with pid 1675089 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1675089 00:23:23.536 20:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1675089 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.793 20:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.689 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.946 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:25.946 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:25.946 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:25.947 20:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:26.879 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:27.137 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:27.137 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:23:28.069 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:28.069 20:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LWl /tmp/spdk.key-null.NWD /tmp/spdk.key-sha256.PHt /tmp/spdk.key-sha384.XtC /tmp/spdk.key-sha512.cjG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:28.069 20:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:29.442 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:29.442 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:29.442 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:29.442 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:29.442 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:29.442 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:29.442 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:29.442 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:29.442 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:29.442 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:29.442 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:29.442 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:29.442 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:29.442 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:29.442 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:29.442 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:29.442 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:29.442 00:23:29.442 real 0m49.664s 00:23:29.442 user 0m47.394s 00:23:29.442 sys 0m5.807s 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.442 20:51:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.442 ************************************ 00:23:29.442 END TEST nvmf_auth_host 00:23:29.442 ************************************ 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.442 ************************************ 00:23:29.442 START TEST nvmf_digest 00:23:29.442 ************************************ 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:29.442 * Looking for test storage... 
00:23:29.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.442 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.443 20:51:24 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:23:29.443 20:51:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.971 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:31.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:31.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.972 20:51:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:31.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:31.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:23:31.972 00:23:31.972 --- 10.0.0.2 ping statistics --- 00:23:31.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.972 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:23:31.972 00:23:31.972 --- 10.0.0.1 ping statistics --- 00:23:31.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.972 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 ************************************ 00:23:31.972 START TEST nvmf_digest_clean 00:23:31.972 ************************************ 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1685177 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1685177 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1685177 ']' 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.972 20:51:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 [2024-07-24 20:51:27.275786] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:23:31.972 [2024-07-24 20:51:27.275870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.972 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.972 [2024-07-24 20:51:27.345606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.972 [2024-07-24 20:51:27.464480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.972 [2024-07-24 20:51:27.464558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.973 [2024-07-24 20:51:27.464575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.973 [2024-07-24 20:51:27.464589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.973 [2024-07-24 20:51:27.464600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:31.973 [2024-07-24 20:51:27.464641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:32.908 null0 00:23:32.908 [2024-07-24 20:51:28.332364] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.908 [2024-07-24 20:51:28.356586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1685334 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1685334 /var/tmp/bperf.sock 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1685334 ']' 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:32.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.908 20:51:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:32.908 [2024-07-24 20:51:28.406253] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:32.908 [2024-07-24 20:51:28.406332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685334 ] 00:23:32.908 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.908 [2024-07-24 20:51:28.466952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.166 [2024-07-24 20:51:28.584598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.099 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.099 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:34.099 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:34.099 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:34.099 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:34.358 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.358 20:51:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.616 nvme0n1 00:23:34.616 20:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:34.616 20:51:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:34.616 Running I/O for 2 seconds... 00:23:37.143 00:23:37.143 Latency(us) 00:23:37.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.143 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:37.143 nvme0n1 : 2.00 18865.50 73.69 0.00 0.00 6776.69 2973.39 13010.11 00:23:37.143 =================================================================================================================== 00:23:37.143 Total : 18865.50 73.69 0.00 0.00 6776.69 2973.39 13010.11 00:23:37.143 0 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:37.143 | select(.opcode=="crc32c") 00:23:37.143 | "\(.module_name) \(.executed)"' 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1685334 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1685334 ']' 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1685334 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685334 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685334' 00:23:37.143 killing process with pid 1685334 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1685334 00:23:37.143 Received shutdown signal, test time was about 2.000000 seconds 00:23:37.143 00:23:37.143 Latency(us) 00:23:37.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.143 =================================================================================================================== 00:23:37.143 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1685334 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1685864 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1685864 /var/tmp/bperf.sock 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1685864 ']' 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:37.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.143 20:51:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:37.402 [2024-07-24 20:51:32.751783] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:37.402 [2024-07-24 20:51:32.751862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685864 ] 00:23:37.402 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.402 Zero copy mechanism will not be used. 00:23:37.402 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.402 [2024-07-24 20:51:32.814972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.402 [2024-07-24 20:51:32.935128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.335 20:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.335 20:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:38.335 20:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:38.335 20:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:38.335 20:51:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:38.593 20:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:38.593 20:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.158 nvme0n1 00:23:39.158 20:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:39.158 20:51:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:39.158 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:39.158 Zero copy mechanism will not be used. 00:23:39.158 Running I/O for 2 seconds... 00:23:41.724 00:23:41.724 Latency(us) 00:23:41.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.724 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:41.724 nvme0n1 : 2.00 4320.23 540.03 0.00 0.00 3699.04 1037.65 5048.70 00:23:41.724 =================================================================================================================== 00:23:41.724 Total : 4320.23 540.03 0.00 0.00 3699.04 1037.65 5048.70 00:23:41.724 0 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock accel_get_stats 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:41.724 | select(.opcode=="crc32c") 00:23:41.724 | "\(.module_name) \(.executed)"' 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1685864 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1685864 ']' 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1685864 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685864 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685864' 00:23:41.724 killing process with pid 1685864 00:23:41.724 20:51:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1685864 00:23:41.724 Received shutdown signal, test time was about 2.000000 seconds 00:23:41.724 00:23:41.724 Latency(us) 00:23:41.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.724 =================================================================================================================== 00:23:41.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.724 20:51:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1685864 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1686402 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1686402 /var/tmp/bperf.sock 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:41.724 20:51:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1686402 ']' 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:41.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.724 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:41.982 [2024-07-24 20:51:37.291830] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:23:41.982 [2024-07-24 20:51:37.291908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686402 ] 00:23:41.982 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.982 [2024-07-24 20:51:37.349470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.982 [2024-07-24 20:51:37.455251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.982 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.982 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:41.982 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:41.982 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:41.982 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:42.548 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.548 20:51:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.806 nvme0n1 00:23:42.806 20:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:42.806 20:51:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:42.806 Running I/O for 2 seconds... 00:23:45.336 00:23:45.336 Latency(us) 00:23:45.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:45.336 nvme0n1 : 2.01 20261.63 79.15 0.00 0.00 6306.81 3519.53 17573.36 00:23:45.336 =================================================================================================================== 00:23:45.336 Total : 20261.63 79.15 0.00 0.00 6306.81 3519.53 17573.36 00:23:45.336 0 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:45.336 | select(.opcode=="crc32c") 00:23:45.336 | "\(.module_name) \(.executed)"' 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 1686402 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1686402 ']' 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1686402 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686402 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686402' 00:23:45.336 killing process with pid 1686402 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1686402 00:23:45.336 Received shutdown signal, test time was about 2.000000 seconds 00:23:45.336 00:23:45.336 Latency(us) 00:23:45.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.336 =================================================================================================================== 00:23:45.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.336 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1686402 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1686816 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1686816 /var/tmp/bperf.sock 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1686816 ']' 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:45.594 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.595 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:45.595 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.595 20:51:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.595 [2024-07-24 20:51:40.993363] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:45.595 [2024-07-24 20:51:40.993443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686816 ] 00:23:45.595 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:45.595 Zero copy mechanism will not be used. 00:23:45.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.595 [2024-07-24 20:51:41.050970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.595 [2024-07-24 20:51:41.159228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.852 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.852 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:45.852 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:45.852 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:45.852 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:46.110 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.110 20:51:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.368 nvme0n1 00:23:46.368 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:46.368 20:51:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:46.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:46.626 Zero copy mechanism will not be used. 00:23:46.626 Running I/O for 2 seconds... 00:23:48.521 00:23:48.521 Latency(us) 00:23:48.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.521 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:48.521 nvme0n1 : 2.00 4971.96 621.50 0.00 0.00 3209.89 2208.81 8592.50 00:23:48.521 =================================================================================================================== 00:23:48.521 Total : 4971.96 621.50 0.00 0.00 3209.89 2208.81 8592.50 00:23:48.521 0 00:23:48.521 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:48.521 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:48.521 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:48.521 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:48.521 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:23:48.521 | select(.opcode=="crc32c") 00:23:48.521 | "\(.module_name) \(.executed)"' 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1686816 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1686816 ']' 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1686816 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686816 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686816' 00:23:48.779 killing process with pid 1686816 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1686816 00:23:48.779 Received shutdown signal, test time was about 2.000000 seconds 00:23:48.779 
00:23:48.779 Latency(us) 00:23:48.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.779 =================================================================================================================== 00:23:48.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.779 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1686816 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1685177 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1685177 ']' 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1685177 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685177 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685177' 00:23:49.346 killing process with pid 1685177 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1685177 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1685177 00:23:49.346 00:23:49.346 real 0m17.689s 00:23:49.346 user 0m35.069s 00:23:49.346 sys 0m4.256s 00:23:49.346 20:51:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.346 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:49.346 ************************************ 00:23:49.346 END TEST nvmf_digest_clean 00:23:49.346 ************************************ 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 ************************************ 00:23:49.604 START TEST nvmf_digest_error 00:23:49.604 ************************************ 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1687258 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 1687258 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1687258 ']' 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.604 20:51:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 [2024-07-24 20:51:45.017162] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:49.604 [2024-07-24 20:51:45.017273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.604 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.604 [2024-07-24 20:51:45.082373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.861 [2024-07-24 20:51:45.188657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.861 [2024-07-24 20:51:45.188705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:49.861 [2024-07-24 20:51:45.188733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.861 [2024-07-24 20:51:45.188744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.861 [2024-07-24 20:51:45.188753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.861 [2024-07-24 20:51:45.188793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.861 [2024-07-24 20:51:45.249316] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.861 20:51:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.861 null0 00:23:49.861 [2024-07-24 20:51:45.367451] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.861 [2024-07-24 20:51:45.391719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1687389 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1687389 /var/tmp/bperf.sock 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1687389 ']' 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:49.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.861 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:50.119 [2024-07-24 20:51:45.442857] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:50.119 [2024-07-24 20:51:45.442932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687389 ] 00:23:50.119 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.119 [2024-07-24 20:51:45.504196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.119 [2024-07-24 20:51:45.621573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.377 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.377 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:50.377 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:50.377 20:51:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:50.635 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:51.201 nvme0n1 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:51.201 20:51:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:51.201 Running I/O for 2 seconds... 00:23:51.201 [2024-07-24 20:51:46.708824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.201 [2024-07-24 20:51:46.708891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.201 [2024-07-24 20:51:46.708910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.201 [2024-07-24 20:51:46.723742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.201 [2024-07-24 20:51:46.723775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.201 [2024-07-24 20:51:46.723793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.201 [2024-07-24 20:51:46.735030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.201 [2024-07-24 20:51:46.735076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.201 [2024-07-24 20:51:46.735094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.201 [2024-07-24 20:51:46.747733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.201 [2024-07-24 20:51:46.747777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14143 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.201 [2024-07-24 20:51:46.747792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.201 [2024-07-24 20:51:46.761206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.201 [2024-07-24 20:51:46.761270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.201 [2024-07-24 20:51:46.761288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.459 [2024-07-24 20:51:46.774333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.459 [2024-07-24 20:51:46.774365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.459 [2024-07-24 20:51:46.774382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.459 [2024-07-24 20:51:46.787046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.459 [2024-07-24 20:51:46.787076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.459 [2024-07-24 20:51:46.787108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.459 [2024-07-24 20:51:46.799312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.459 [2024-07-24 20:51:46.799354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.459 [2024-07-24 20:51:46.799371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.459 [2024-07-24 20:51:46.812653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.459 [2024-07-24 20:51:46.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.459 [2024-07-24 20:51:46.812702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.459 [2024-07-24 20:51:46.826481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.459 [2024-07-24 20:51:46.826512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.826529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.838806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.838837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.838854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.855253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.855282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.855313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.866310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.866339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.866370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.880490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.880519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.880534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.895107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.895169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.906574] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.906603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.906632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.920596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.920626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.920643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.935542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.935573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.935589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.950148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.950179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.950196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.962275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.962305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.962323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.974323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.974355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.974373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:46.985647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:46.985676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:46.985712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:47.000079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:47.000125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:47.000143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.460 [2024-07-24 20:51:47.017360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.460 [2024-07-24 20:51:47.017390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.460 [2024-07-24 20:51:47.017422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.028129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.028158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.028173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.043851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.043883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.043899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.058721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.058750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.058782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.070896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.070924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.083882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.083911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.083927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.097145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.097173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.097203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.109721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.109756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15595 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.122062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.122090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.122121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.135810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.135837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.135869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.148564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.148594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.148610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.159156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.159202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:14180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.159218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.173887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.173915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.173945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.185491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.185521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.719 [2024-07-24 20:51:47.185538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.719 [2024-07-24 20:51:47.198951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.719 [2024-07-24 20:51:47.198982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.198998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.209882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.209910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.209940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.223578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.223623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.223638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.236453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.236487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.236504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.249665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.249696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.249726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.261632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.261676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.720 [2024-07-24 20:51:47.274315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.720 [2024-07-24 20:51:47.274346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.720 [2024-07-24 20:51:47.274363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.288401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.288431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.288447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.299081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.299110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.299142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.314464] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.314494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.314537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.330543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.330573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.330612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.343984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.344015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.344046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.357885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.357916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.357933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.369281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.369335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.369351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.384262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.384302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.384318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.396424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.396454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.396486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.412196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.412226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.412263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.427308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.427353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.427370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.439511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.439541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.439571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.454045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.454077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.454093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.469266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.469296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.469313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.480943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.480971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.481001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.496401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.496432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.496465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.508629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.508659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.508692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.523348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.523378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:51.979 [2024-07-24 20:51:47.523413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.979 [2024-07-24 20:51:47.536319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:51.979 [2024-07-24 20:51:47.536364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.979 [2024-07-24 20:51:47.536379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.549969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.550002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.550021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.563396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.563424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.563461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.577749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.577778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:13023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.577795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.589538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.589582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.589601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.605319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.605349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.605380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.618435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.618465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.618497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.631264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.631309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.631324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.646622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.646651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.646683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.660069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.660103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.660121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.673790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.673823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.673841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.687320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.687354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.687386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.698425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.698452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.698483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.238 [2024-07-24 20:51:47.712206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.238 [2024-07-24 20:51:47.712239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.238 [2024-07-24 20:51:47.712267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.726717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.726746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.726762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.740548] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.740578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.740594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.752951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.752986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.753006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.770451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.770504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.787643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.787678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.787697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:52.239 [2024-07-24 20:51:47.799147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.239 [2024-07-24 20:51:47.799180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.239 [2024-07-24 20:51:47.799198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.813374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.813425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.813441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.827538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.827582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.827601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.840643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.840672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.840703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.856344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.856372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.856403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.869192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.869226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.869252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.884252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.884279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.884310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.895921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.895954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.895973] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.910388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.910415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.910446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.925443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.925472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.925509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.937588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.937621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.937639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.951385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.951413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.964060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.964093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.964112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.978784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.978813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.978842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:47.994223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:47.994273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:47.994290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:48.005489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.497 [2024-07-24 20:51:48.005534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:20242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.497 [2024-07-24 20:51:48.005550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.497 [2024-07-24 20:51:48.019341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.498 [2024-07-24 20:51:48.019371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.498 [2024-07-24 20:51:48.019403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.498 [2024-07-24 20:51:48.033536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.498 [2024-07-24 20:51:48.033571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.498 [2024-07-24 20:51:48.033590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.498 [2024-07-24 20:51:48.046223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.498 [2024-07-24 20:51:48.046263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.498 [2024-07-24 20:51:48.046296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.498 [2024-07-24 20:51:48.059217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.498 [2024-07-24 20:51:48.059266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.498 [2024-07-24 20:51:48.059285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.074612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.074641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.074674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.086630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.086664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.086683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.102692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.102726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.102744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.113828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.113861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.113878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.130096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.130131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.130149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.145486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.145515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.145547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.156977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.157011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.157029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.171979] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.172022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.184436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.184463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.184493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.200188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.200221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.200239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.216002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.216045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.216062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.228385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.228422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.228453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.243994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.244027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.244046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.259673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.259703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.271121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.271157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.271175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.285329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.756 [2024-07-24 20:51:48.285359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.756 [2024-07-24 20:51:48.285396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.756 [2024-07-24 20:51:48.298701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.757 [2024-07-24 20:51:48.298747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.757 [2024-07-24 20:51:48.298763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:52.757 [2024-07-24 20:51:48.312127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:52.757 [2024-07-24 20:51:48.312156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.757 [2024-07-24 20:51:48.312190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.325337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.325366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.325397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.339515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.339557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.339575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.351351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.351380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.351413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.365050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.365080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.379496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.379541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:53.015 [2024-07-24 20:51:48.379559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.392268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.392326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.392343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.404122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.404155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.404174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.420294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.420345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.420361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.435947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.435978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.436010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.447573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.447618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.447636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.460973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.461007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.461026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.474310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.474339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.474371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.487577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.487610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.487628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.502318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.502349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.502365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.514957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.514991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.515016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.528221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.528261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.528282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.540473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 
00:23:53.015 [2024-07-24 20:51:48.540504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.540537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.555254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.555309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.555325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.015 [2024-07-24 20:51:48.571013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.015 [2024-07-24 20:51:48.571044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.015 [2024-07-24 20:51:48.571061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.582392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.582421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.582454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.596297] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.596326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.596358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.609090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.609135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.609152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.623181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.623209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.623240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.634959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.634994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.635026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.647206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.647236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.647263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.659970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.659999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.672832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.672860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.672891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 [2024-07-24 20:51:48.687118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a8cb0) 00:23:53.274 [2024-07-24 20:51:48.687163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.274 [2024-07-24 20:51:48.687179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.274 00:23:53.274 Latency(us) 00:23:53.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.274 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:53.274 nvme0n1 : 2.01 18806.99 73.46 0.00 0.00 6796.30 3689.43 19223.89 00:23:53.274 =================================================================================================================== 00:23:53.274 Total : 18806.99 73.46 0.00 0.00 6796.30 3689.43 19223.89 00:23:53.274 0 00:23:53.274 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:53.274 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:53.274 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:53.274 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:53.274 | .driver_specific 00:23:53.274 | .nvme_error 00:23:53.274 | .status_code 00:23:53.274 | .command_transient_transport_error' 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1687389 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1687389 ']' 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1687389 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687389 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687389' 00:23:53.532 killing process with pid 1687389 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1687389 00:23:53.532 Received shutdown signal, test time was about 2.000000 seconds 00:23:53.532 00:23:53.532 Latency(us) 00:23:53.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.532 =================================================================================================================== 00:23:53.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.532 20:51:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1687389 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1687805 00:23:53.790 20:51:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1687805 /var/tmp/bperf.sock 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1687805 ']' 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:53.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.790 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:53.790 [2024-07-24 20:51:49.302489] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:23:53.790 [2024-07-24 20:51:49.302573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687805 ] 00:23:53.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:53.790 Zero copy mechanism will not be used. 
00:23:53.790 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.048 [2024-07-24 20:51:49.360448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.048 [2024-07-24 20:51:49.469704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.048 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.048 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:54.048 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:54.048 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:54.306 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:54.306 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.306 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:54.306 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.306 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.307 20:51:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.873 nvme0n1 00:23:54.873 20:51:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:54.873 20:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.873 20:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:54.873 20:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.873 20:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:54.873 20:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.873 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:54.873 Zero copy mechanism will not be used. 00:23:54.873 Running I/O for 2 seconds... 
00:23:54.873 [2024-07-24 20:51:50.361076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.361140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.361161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.369140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.369177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.369198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.376827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.376862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.376881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.384562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.384605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.384622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.391871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.391904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.391922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.399156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.399190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.399209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.406398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.406427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.406459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.413571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.413613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.413629] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.421358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.421406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.428842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.428875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.428893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.873 [2024-07-24 20:51:50.436139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:54.873 [2024-07-24 20:51:50.436172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.873 [2024-07-24 20:51:50.436191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.443355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.443384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.443416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.450790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.450830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.450848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.457780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.457812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.457831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.464843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.472001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.472034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.472053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.479167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.479200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.479218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.486660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.486689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.486721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.493540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.493571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.493587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.500234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 
20:51:50.500286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.500311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.506960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.506989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.507022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.513620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.513649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.513680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.520349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.520397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.526952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.526982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.526998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.533492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.533521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.533537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.540020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.540049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.540066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.132 [2024-07-24 20:51:50.546655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.132 [2024-07-24 20:51:50.546684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.132 [2024-07-24 20:51:50.546715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.553472] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.553503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.553520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.560521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.560569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.560586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.567849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.567881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.567920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.574692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.574736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.574752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.581513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.581557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.581574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.588457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.588500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.588516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.595417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.595447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.595463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.602104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.602133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.602163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.608740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.608769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.608785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.615470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.615499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.615515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.622102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.622149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.622166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.628687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.628725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 
20:51:50.628743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.635429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.635473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.642303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.642333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.642349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.649057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.649086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.649121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.655821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.655851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.655867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.662655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.662687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.662719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.669466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.669496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.669512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.676251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.676282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.676298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.683061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.683106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.689749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.689794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.133 [2024-07-24 20:51:50.696384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.133 [2024-07-24 20:51:50.696413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-24 20:51:50.696430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.703094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.703124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.709813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 
20:51:50.709842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.709858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.716649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.716679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.716699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.723489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.723547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.730444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.730488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.730504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.737306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.737352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.737369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.744063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.744106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.744127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.750829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.750859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.750876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.757557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.757587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.757603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.764446] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.764476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.764493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.771228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.771267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.771284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.777951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.777982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.777998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.784811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.784853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.784871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.791683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.791713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.791729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.798503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.798548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.805421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.805450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.805481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.393 [2024-07-24 20:51:50.812228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.393 [2024-07-24 20:51:50.812278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.393 [2024-07-24 20:51:50.812296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.818827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.818856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.818872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.825674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.825704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.825721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.832431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.832475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.832491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.839233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.839269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.839292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.845874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.845922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.845938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.852541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.852584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.852600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.859397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.859426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.859448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.866514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.866545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.866576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.873091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.873123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.873140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.879973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.880004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.880021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.886802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.886833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.886850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.893508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.893537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.893554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.900308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.900339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.900355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.906979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.907008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.907025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.913726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.913756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.913772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.920481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 
20:51:50.920517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.920534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.927288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.927318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.927334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.934029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.934071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.934086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.940790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.940816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.940831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.947684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.947731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.394 [2024-07-24 20:51:50.954474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.394 [2024-07-24 20:51:50.954504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.394 [2024-07-24 20:51:50.954521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.961172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.961216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.961233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.967770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.967801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.967817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.974412] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.974442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.974459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.981397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.981428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.981445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.988133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.988163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.988179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.655 [2024-07-24 20:51:50.994735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:55.655 [2024-07-24 20:51:50.994779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.655 [2024-07-24 20:51:50.994795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.001454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.001484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.001500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.008389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.008433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.008448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.015329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.015373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.015389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.022126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.022156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.022187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.028895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.028924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.028953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.035673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.035720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.035742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.042447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.042477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.042493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.049115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.049145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.049161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.055707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.055749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.055765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.062500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.062542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.062558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.069160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.069188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.069220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.075931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.075960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.075976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.082884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.082913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.082944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.089750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.089780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.089796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.096553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.096603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.096621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.103511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.103541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.103557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.110271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.110299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.116879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.116923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.116938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.123506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.123538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.123555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.130073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.655 [2024-07-24 20:51:51.130103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.655 [2024-07-24 20:51:51.130120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.655 [2024-07-24 20:51:51.136808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.136852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.136868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.143498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.143527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.143543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.150319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.150348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.150364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.157171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.157199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.157230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.164066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.164108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.164124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.171004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.171033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.171065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.177726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.177771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.177788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.184551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.184580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.184597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.191343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.191372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.191389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.198082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.198125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.198142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.204805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.204834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.204851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.211510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.211544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.211561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.656 [2024-07-24 20:51:51.218251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.656 [2024-07-24 20:51:51.218292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.656 [2024-07-24 20:51:51.218308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.915 [2024-07-24 20:51:51.225036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.915 [2024-07-24 20:51:51.225066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.915 [2024-07-24 20:51:51.225082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.915 [2024-07-24 20:51:51.232070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.915 [2024-07-24 20:51:51.232099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.915 [2024-07-24 20:51:51.232116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.915 [2024-07-24 20:51:51.239010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.915 [2024-07-24 20:51:51.239056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.915 [2024-07-24 20:51:51.239072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.246048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.246077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.246107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.252836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.252880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.252897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.259625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.259654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.259670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.266434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.266479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.273267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.273296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.273313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.280006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.280037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.280053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.286742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.286785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.286800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.293599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.293627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.293660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.300408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.300437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.300454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.307199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.307229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.307253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.313936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.313966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.313983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.321158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.321188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.321205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.327888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.327932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.327953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.334665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.334696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.334713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.341396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.341427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.341443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.348059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.348103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.348119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.354818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.354848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.354864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.361533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.361562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.361578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.368085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.368131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.368147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.374720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.374752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.374769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.381301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.381345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.381361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.388048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.388099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.388116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.394903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.394933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.394950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.401560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.401589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.401605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.408339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.408369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.408385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.415171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.415202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.916 [2024-07-24 20:51:51.415218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.916 [2024-07-24 20:51:51.421714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.916 [2024-07-24 20:51:51.421757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.421774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.428372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.428401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.428417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.435032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.435076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.435092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.441730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.441760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.441776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.448427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.448456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.448472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.455547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.455574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.455607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.462906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.462938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.462956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.470324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.470353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.470373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:55.917 [2024-07-24 20:51:51.477606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:55.917 [2024-07-24 20:51:51.477638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.917 [2024-07-24 20:51:51.477657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.484904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.484936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.484954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.492347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.492375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.492391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.499735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.499768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.499786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.507022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.507054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.507079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.514217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.514257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.514277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.521468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.521497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.528686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.528719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.528737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.536031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.536064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.536082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.543256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.543303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.543319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.550452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.550481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.176 [2024-07-24 20:51:51.550497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:56.176 [2024-07-24 20:51:51.557664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290)
00:23:56.176 [2024-07-24 20:51:51.557696]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.557714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.564948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.564981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.564999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.572317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.572356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.572388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.579758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.579792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.579810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.587262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.587313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.587329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.594828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.594862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.594881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.602184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.602217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.602235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.609444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.176 [2024-07-24 20:51:51.609473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.176 [2024-07-24 20:51:51.609489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.176 [2024-07-24 20:51:51.616704] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.616737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.623948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.623981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.623999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.631217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.631261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.631283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.638421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.638451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.645689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.645722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.645741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.652991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.653023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.660161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.660194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.660212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.667384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.667427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.667443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.674789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.674821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.674839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.681956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.681988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.682006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.689208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.689247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.689268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.696967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.697006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.697025] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.704322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.704366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.704382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.711581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.711614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.711632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.718914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.718946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.726503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.726549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.726569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.733681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.733714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.733732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.177 [2024-07-24 20:51:51.740939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.177 [2024-07-24 20:51:51.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.177 [2024-07-24 20:51:51.740991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.748110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.748143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.748161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.755371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.755400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.762546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.762574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.762589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.769893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.769926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.769944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.777213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.777252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.777272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.784715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.784747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.784765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.791925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.791957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.791975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.799118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.799150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.799168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.806463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.806494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.806510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.813747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.813779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.813796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.820985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.821019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.821042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.828611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.828646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.828665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.837423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.837467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.837482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.846991] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.847027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.847046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.856625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.856660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.856679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.866357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.436 [2024-07-24 20:51:51.866403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.436 [2024-07-24 20:51:51.866419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.436 [2024-07-24 20:51:51.875392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.875423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.875440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.884713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.884749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.884768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.894377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.894410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.894428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.904164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.904205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.904225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.914082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.914116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.914135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.923975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.924010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.924028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.933691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.933727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.933746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.943259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.943315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.943332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.952758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.952793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.952813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.960064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.960098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.967909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.967961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.976000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.976034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.976053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.984428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.984459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.984476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:51.992934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:51.992968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:51.992987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.437 [2024-07-24 20:51:52.000858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.437 [2024-07-24 20:51:52.000891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.437 [2024-07-24 20:51:52.000910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.725 [2024-07-24 20:51:52.008951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.725 [2024-07-24 20:51:52.008987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.725 [2024-07-24 20:51:52.009006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.725 [2024-07-24 20:51:52.016956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.725 [2024-07-24 20:51:52.016992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.725 [2024-07-24 20:51:52.017011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.725 [2024-07-24 20:51:52.025125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.725 [2024-07-24 20:51:52.025159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.725 [2024-07-24 20:51:52.025178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.725 [2024-07-24 20:51:52.032906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.725 [2024-07-24 20:51:52.032940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.725 [2024-07-24 20:51:52.032958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.725 [2024-07-24 20:51:52.040869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.040903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.040923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.048695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.048730] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.048755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.056834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.056869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.056889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.065170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.065205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.065224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.072983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.073016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.073034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.081011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.081045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.081064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.089594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.089629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.089647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.097593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.097627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.097646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.105819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.105853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.105872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.113749] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.113782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.113801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.121752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.121787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.121805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.130055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.130090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.130110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.137883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.137918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.137936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.145994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.146029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.146047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.154221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.154266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.154301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.162522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.162570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.162589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.170713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.170747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.170766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.178118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.178152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.178170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.186099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.186132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.186157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.194258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.194305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.194322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.202068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.202102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.202121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.210173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.210206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.210225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.218204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.218237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.218266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.726 [2024-07-24 20:51:52.226256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.726 [2024-07-24 20:51:52.226303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.726 [2024-07-24 20:51:52.226320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.727 [2024-07-24 20:51:52.234076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.727 [2024-07-24 20:51:52.234110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:56.727 [2024-07-24 20:51:52.234128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.727 [2024-07-24 20:51:52.242069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.727 [2024-07-24 20:51:52.242103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.727 [2024-07-24 20:51:52.242121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.727 [2024-07-24 20:51:52.250210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.727 [2024-07-24 20:51:52.250251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.727 [2024-07-24 20:51:52.250272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.727 [2024-07-24 20:51:52.258159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.727 [2024-07-24 20:51:52.258199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.727 [2024-07-24 20:51:52.258218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.727 [2024-07-24 20:51:52.266253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.727 [2024-07-24 20:51:52.266301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.727 [2024-07-24 20:51:52.266318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.274439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.274470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.274487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.282632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.282666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.282684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.290917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.290951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.290969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.298817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.298850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.298868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.306358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.306387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.306420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.314349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.314394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.314411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.320851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.320884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.320903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.329286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.329317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.329334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.337678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.337712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.337731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.345303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.345334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.345352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.986 [2024-07-24 20:51:52.352773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1abd290) 00:23:56.986 [2024-07-24 20:51:52.352808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.986 [2024-07-24 20:51:52.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.986 00:23:56.986 Latency(us) 00:23:56.986 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:56.986 nvme0n1 : 2.00 4268.29 533.54 0.00 0.00 3743.00 740.31 10000.31 00:23:56.986 =================================================================================================================== 00:23:56.986 Total : 4268.29 533.54 0.00 0.00 3743.00 740.31 10000.31 00:23:56.986 0 00:23:56.986 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:56.986 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:56.986 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:56.986 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:56.986 | .driver_specific 00:23:56.986 | .nvme_error 00:23:56.986 | .status_code 00:23:56.986 | .command_transient_transport_error' 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 275 > 0 )) 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1687805 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1687805 ']' 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1687805 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1687805 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687805' 00:23:57.244 killing process with pid 1687805 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1687805 00:23:57.244 Received shutdown signal, test time was about 2.000000 seconds 00:23:57.244 00:23:57.244 Latency(us) 00:23:57.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.244 =================================================================================================================== 00:23:57.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.244 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1687805 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1688218 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randwrite -o 4096 -t 2 -q 128 -z 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1688218 /var/tmp/bperf.sock 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1688218 ']' 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.503 20:51:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.503 [2024-07-24 20:51:52.974374] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:23:57.503 [2024-07-24 20:51:52.974456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688218 ] 00:23:57.503 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.503 [2024-07-24 20:51:53.038475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.760 [2024-07-24 20:51:53.149417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.760 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.760 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:57.760 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.760 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:58.017 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:58.017 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.017 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:58.017 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.017 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.017 20:51:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.582 nvme0n1 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:58.582 20:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.582 Running I/O for 2 seconds... 
00:23:58.582 [2024-07-24 20:51:54.051033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.051341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.051380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.065529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.065840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.065874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.080060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.080382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.080412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.094517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.094803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.094836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.108999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.109316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.109344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.123217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.123557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.123596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.582 [2024-07-24 20:51:54.137503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.582 [2024-07-24 20:51:54.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.582 [2024-07-24 20:51:54.137841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.151412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.151751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.151783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.165628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.165893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.165923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.179828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.180123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.180153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.194097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.194413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.194440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.208268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.208597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.208635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.222355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.222675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.222706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.236460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.236760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.236796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.250687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.250982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.251014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.264903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.265197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.265228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.279189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.279598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.279629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.293467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.293768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.293798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.307701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.307963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.307995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.321829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.322120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.322152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.336033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.336316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.336345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.350168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.350451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.350479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.364315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.839 [2024-07-24 20:51:54.364584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.839 [2024-07-24 20:51:54.364613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.839 [2024-07-24 20:51:54.378557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.840 [2024-07-24 20:51:54.378863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.840 [2024-07-24 20:51:54.378894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:58.840 [2024-07-24 20:51:54.392733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:58.840 [2024-07-24 20:51:54.392995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:58.840 [2024-07-24 20:51:54.393026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.406663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.406925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.406955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.420779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.421041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.421071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.434822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.435126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.435156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.448868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.449171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.449200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.462427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.462691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.462717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.475959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.476320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.476347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.489811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.490110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.490137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.503520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.503783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.503811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.517228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.517477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.517504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.531007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.531326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.531353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.544807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.545147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.545174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.558368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.558680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.097 [2024-07-24 20:51:54.558708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.097 [2024-07-24 20:51:54.572101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.097 [2024-07-24 20:51:54.572362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.572391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.585828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.586180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.586208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.599425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.599782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.599825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.613135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.613412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.626712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.627032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.627059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.640407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.640758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.640800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.098 [2024-07-24 20:51:54.654111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.098 [2024-07-24 20:51:54.654418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.098 [2024-07-24 20:51:54.654446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.667521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.667839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.667865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.681124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.681401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.681428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.694743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.695067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.695093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.708318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.708589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.708617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.721690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.721955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.721988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.735310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.735547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.735574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.748919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.749185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.749211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.762469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.762732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.762765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.776083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.776359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.776386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.789692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.789954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.789981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.803281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.803518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.803544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.816771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.817034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.817062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.829950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.830214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.830250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.843511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.356 [2024-07-24 20:51:54.843885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.356 [2024-07-24 20:51:54.843914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.356 [2024-07-24 20:51:54.857393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.357 [2024-07-24 20:51:54.857657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.357 [2024-07-24 20:51:54.857684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.357 [2024-07-24 20:51:54.870862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.357 [2024-07-24 20:51:54.871165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.357 [2024-07-24 20:51:54.871194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.357 [2024-07-24 20:51:54.884404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.357 [2024-07-24 20:51:54.884665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.357 [2024-07-24 20:51:54.884692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.357 [2024-07-24 20:51:54.898148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.357 [2024-07-24 20:51:54.898394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.357 [2024-07-24 20:51:54.898421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.357 [2024-07-24 20:51:54.911712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.357 [2024-07-24 20:51:54.912063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.357 [2024-07-24 20:51:54.912090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.925127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.925373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.925400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.938661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.938953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.952231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.952515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.952542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.965872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.966136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.966163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.979520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.979888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.979914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:54.993052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:54.993356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:54.993384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.006603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.006874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.006901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.020207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.020449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.020476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.033701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.034050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.034077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.047726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.047989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.048016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.061352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.061585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.061610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.074606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.074872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.074907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.087930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.088195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.088224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.101398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.101662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.101690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.114973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:23:59.615 [2024-07-24 20:51:55.115254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.615 [2024-07-24 20:51:55.115282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.615 [2024-07-24 20:51:55.128568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
[2024-07-24 20:51:55.128830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.615 [2024-07-24 20:51:55.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.615 [2024-07-24 20:51:55.142077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.615 [2024-07-24 20:51:55.142349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.615 [2024-07-24 20:51:55.142376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.615 [2024-07-24 20:51:55.155746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.615 [2024-07-24 20:51:55.156007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.615 [2024-07-24 20:51:55.156034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.615 [2024-07-24 20:51:55.169299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.615 [2024-07-24 20:51:55.169601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.615 [2024-07-24 20:51:55.169628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.873 [2024-07-24 20:51:55.182666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) 
with pdu=0x2000190fd208 00:23:59.873 [2024-07-24 20:51:55.182928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.873 [2024-07-24 20:51:55.182955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.873 [2024-07-24 20:51:55.196147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.873 [2024-07-24 20:51:55.196392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.873 [2024-07-24 20:51:55.196426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.873 [2024-07-24 20:51:55.209676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.873 [2024-07-24 20:51:55.209940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.873 [2024-07-24 20:51:55.209967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.873 [2024-07-24 20:51:55.223217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.223465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.223492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.236751] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.237014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.237041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.250386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.250678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.250705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.264013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.264253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.264290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.277393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.277645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.277687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.291152] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.291398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.291425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.304730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.304995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.305023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.318564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.318832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.318859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.332337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.332605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.332632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:59.874 [2024-07-24 20:51:55.345203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.345457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.345487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.358792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.359056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.359085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.372675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.372939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.386427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.386692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.386720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.400145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.400422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.400449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.414050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.414325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.414353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.874 [2024-07-24 20:51:55.427699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:23:59.874 [2024-07-24 20:51:55.427962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.874 [2024-07-24 20:51:55.427991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.441070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.441347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.441377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.454807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.455067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.455097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.468715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.468978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.469008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.482829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.483094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.483124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.497111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.497403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.497430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.511283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.511565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.511609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.525601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.525895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.525925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.539773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.540033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.540063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.554081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.554412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 
[2024-07-24 20:51:55.554445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.568333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.568594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.568638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.582381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.582645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.582675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.596386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.596722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.596754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.610456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.610721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1858 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.610752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.624634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.624938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.624969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.638781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.639075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.639105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.653032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.653344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.653371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.667352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.667684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.681554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.681862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.133 [2024-07-24 20:51:55.695482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.133 [2024-07-24 20:51:55.695752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.133 [2024-07-24 20:51:55.695779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.709420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.709715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.709745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.723563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.723837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.723867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.737576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.737872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.737902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.751444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.751741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.751772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.765652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.765945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.765975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.779751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 
[2024-07-24 20:51:55.780011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.780041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.793805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.794098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.794128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.807966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.808268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.808313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.822177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.822537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.822565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.836331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) 
with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.836698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.836728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.850365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.850712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.864647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.864908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.864939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.878860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.879187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.893060] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.893370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.397 [2024-07-24 20:51:55.893397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.397 [2024-07-24 20:51:55.907349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.397 [2024-07-24 20:51:55.907753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.398 [2024-07-24 20:51:55.907782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.398 [2024-07-24 20:51:55.921627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.398 [2024-07-24 20:51:55.921889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.398 [2024-07-24 20:51:55.921918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.398 [2024-07-24 20:51:55.935843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.398 [2024-07-24 20:51:55.936102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.398 [2024-07-24 20:51:55.936132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.398 [2024-07-24 20:51:55.949881] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.398 [2024-07-24 20:51:55.950152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.398 [2024-07-24 20:51:55.950182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.657 [2024-07-24 20:51:55.963861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.657 [2024-07-24 20:51:55.964154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.657 [2024-07-24 20:51:55.964183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.657 [2024-07-24 20:51:55.978119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.657 [2024-07-24 20:51:55.978490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.657 [2024-07-24 20:51:55.978517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.657 [2024-07-24 20:51:55.992167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208 00:24:00.657 [2024-07-24 20:51:55.992496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.657 [2024-07-24 20:51:55.992523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
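Each completion record above carries the status field "(00/22)": NVMe Status Code Type 0x0 (generic command status) and Status Code 0x22, Transient Transport Error — the retryable status the test expects after every injected digest failure, and the one `get_transient_errcount` tallies below. A minimal decoder for that printed field (illustrative sketch only; the name table lists just a few generic codes):

```python
# Decode the "(SCT/SC)" status field that spdk_nvme_print_completion logs.
# Illustrative sketch; only a few generic (SCT 0x0) status codes are listed.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x04: "DATA TRANSFER ERROR",
    0x22: "COMMAND TRANSIENT TRANSPORT ERROR",
}

def decode_status(field: str) -> tuple:
    """Split "(00/22)" into (status code type, status code, name)."""
    sct_hex, sc_hex = field.strip("()").split("/")
    sct, sc = int(sct_hex, 16), int(sc_hex, 16)
    name = GENERIC_STATUS.get(sc, "UNKNOWN") if sct == 0x0 else "UNKNOWN"
    return sct, sc, name

print(decode_status("(00/22)"))
```

Because the completions also show dnr:0 (do-not-retry clear), the host driver is free to resubmit each WRITE, which is why every corrupted digest surfaces as a retried command rather than a hard I/O failure.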
00:24:00.657 [2024-07-24 20:51:56.006372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:24:00.657 [2024-07-24 20:51:56.006646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:00.657 [2024-07-24 20:51:56.006676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.657 [2024-07-24 20:51:56.020433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:24:00.657 [2024-07-24 20:51:56.020727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:00.657 [2024-07-24 20:51:56.020757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.657 [2024-07-24 20:51:56.034480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18e14f0) with pdu=0x2000190fd208
00:24:00.657 [2024-07-24 20:51:56.034786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:00.657 [2024-07-24 20:51:56.034816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:00.657
00:24:00.657 Latency(us)
00:24:00.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.657 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:00.657 nvme0n1 : 2.01 18330.17 71.60 0.00 0.00 6966.10 6189.51 17767.54
00:24:00.657 ===================================================================================================================
00:24:00.657 Total : 18330.17 71.60 0.00 0.00 6966.10 6189.51 17767.54
00:24:00.657 0
00:24:00.657 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:00.657 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:00.657 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:00.657 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:00.657 | .driver_specific
00:24:00.657 | .nvme_error
00:24:00.657 | .status_code
00:24:00.657 | .command_transient_transport_error'
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1688218
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1688218 ']'
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1688218
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688218
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688218'
killing process with pid 1688218
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1688218
00:24:00.915 Received shutdown signal, test time was about 2.000000 seconds
00:24:00.915
00:24:00.915 Latency(us)
00:24:00.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.915 ===================================================================================================================
00:24:00.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:00.915 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1688218
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1688739
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1688739 /var/tmp/bperf.sock
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1688739 ']'
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:01.173 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:01.173 [2024-07-24 20:51:56.638082] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization...
00:24:01.173 [2024-07-24 20:51:56.638161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688739 ]
00:24:01.173 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:01.173 Zero copy mechanism will not be used.
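The `waitforlisten 1688739 /var/tmp/bperf.sock` call above blocks until the freshly spawned bdevperf process brings up its RPC server on the UNIX socket, so that the later rpc.py calls do not race the process start. A hypothetical Python stand-in for that polling loop (not the actual shell helper; the socket path in the demo is a temp file, not /var/tmp/bperf.sock):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until something is accepting connections on sock_path.

    Hypothetical stand-in for the waitforlisten shell helper, which blocks
    until the just-launched RPC server is listening before issuing RPCs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)  # succeeds once the server is listening
            return True
        except OSError:  # not yet bound, or connection refused
            time.sleep(interval)
    return False

# Self-contained demo: bring a listener up after a short start-up delay.
path = os.path.join(tempfile.mkdtemp(), "bperf.sock")

def serve() -> None:
    time.sleep(0.2)  # simulate process start-up latency
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    time.sleep(2.0)  # stay listening long enough for the poller
    srv.close()

threading.Thread(target=serve, daemon=True).start()
ok = wait_for_listen(path)
print(ok)
```

Polling by connecting (rather than checking that the socket file exists) matters: the file appears at bind() time, but connects only succeed once listen() has been called.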
00:24:01.173 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.173 [2024-07-24 20:51:56.698934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.430 [2024-07-24 20:51:56.806583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.430 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.430 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:01.430 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:01.430 20:51:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.686 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.942 nvme0n1 00:24:02.199 20:51:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:02.199 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.199 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:02.199 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.199 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:02.199 20:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.199 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:02.199 Zero copy mechanism will not be used. 00:24:02.199 Running I/O for 2 seconds... 
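The stream of data_crc32_calc_done errors that follows is the intended effect of the `accel_error_inject_error -o crc32c -t corrupt` call above: NVMe/TCP protects each PDU payload with a CRC32C data digest (DDGST, enabled here via `--ddgst`), and when the injected corruption makes the recomputed digest mismatch, the command completes with a transient transport error and is retried. A bit-serial CRC32C sketch for reference (SPDK's real implementation is table- or instruction-accelerated; this only illustrates the polynomial):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    Minimal bit-serial sketch of the digest NVMe/TCP uses for HDGST/DDGST;
    not SPDK's optimized implementation.
    """
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C:
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

Any single corrupted bit in the payload (or in the computed digest itself, as injected here) changes the comparison result, so each corruption deterministically produces one digest error and one retryable completion.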
00:24:02.199 [2024-07-24 20:51:57.639508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.639930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.639970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.648422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.648789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.648822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.657286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.657648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.666433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.666792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.666825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.674785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.675124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.675157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.683754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.199 [2024-07-24 20:51:57.684120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.199 [2024-07-24 20:51:57.684152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.199 [2024-07-24 20:51:57.692685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.693042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.693074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.700914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.701277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.701319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.709472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.709817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.709849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.718345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.718678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.718708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.727230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.727581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.727612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.735697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.736024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.200 [2024-07-24 20:51:57.736055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.744233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.744580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.744612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.750810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.751113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.751141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.757423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.757741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.757769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.200 [2024-07-24 20:51:57.764505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.200 [2024-07-24 20:51:57.764826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.200 [2024-07-24 20:51:57.764854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.458 [2024-07-24 20:51:57.771398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.458 [2024-07-24 20:51:57.771713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.458 [2024-07-24 20:51:57.771741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.458 [2024-07-24 20:51:57.778103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.458 [2024-07-24 20:51:57.778482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.458 [2024-07-24 20:51:57.778531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.458 [2024-07-24 20:51:57.785228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.458 [2024-07-24 20:51:57.785561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.458 [2024-07-24 20:51:57.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.458 [2024-07-24 20:51:57.792345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.458 [2024-07-24 20:51:57.792680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.458 [2024-07-24 20:51:57.792708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.458 [2024-07-24 20:51:57.799328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.458 [2024-07-24 20:51:57.799676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.799703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.806072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.806412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.806441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.812497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.812590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.812627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.818864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 
00:24:02.459 [2024-07-24 20:51:57.819153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.825289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.825576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.825604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.831757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.832042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.832070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.838038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.838340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.838369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.844425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.844722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.844750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.850757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.851072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.851100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.857459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.857626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.857653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.863643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.863949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.863977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 
20:51:57.870152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.870454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.870483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.876231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.876529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.876574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.882527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.882827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.882855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.459 [2024-07-24 20:51:57.888986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.459 [2024-07-24 20:51:57.889281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.459 [2024-07-24 20:51:57.889310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.895493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.895808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.895837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.901861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.902148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.902178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.908185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.908481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.908509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.914277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.914582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.914613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.920921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.921209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.921237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.927163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.927482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.933412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.933700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.933728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.939819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.940109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.940136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.946593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.946932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.954169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.954591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.954619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.962385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.962730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.962757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.970405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.970738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.459 [2024-07-24 20:51:57.970765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.459 [2024-07-24 20:51:57.978530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.459 [2024-07-24 20:51:57.978939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:57.978966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.460 [2024-07-24 20:51:57.986545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.460 [2024-07-24 20:51:57.986894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:57.986922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.460 [2024-07-24 20:51:57.993959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.460 [2024-07-24 20:51:57.994256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:57.994284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.460 [2024-07-24 20:51:58.001187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.460 [2024-07-24 20:51:58.001574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:58.001602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.460 [2024-07-24 20:51:58.009447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.460 [2024-07-24 20:51:58.009737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:58.009767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.460 [2024-07-24 20:51:58.016908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.460 [2024-07-24 20:51:58.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.460 [2024-07-24 20:51:58.017262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.024681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.025083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.025110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.032768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.033160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.033188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.040202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.040576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.040604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.046229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.046542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.046570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.053153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.053447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.053476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.060291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.060594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.060623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.067171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.067462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.067490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.073837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.074158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.080422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.080711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.080739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.087115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.087413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.087441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.094103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.094401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.094429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.102200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.102573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.102601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.110595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.110975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.111002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.118757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.119102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.119130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.126808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.127165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.127193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.135037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.135467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.135495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.143459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.143861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.143890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.150378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.150719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.150750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.157320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.157629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.163732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.164101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.164130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.170561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.170954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.170996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.178597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.178904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.178932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.185118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.185415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.718 [2024-07-24 20:51:58.185443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.718 [2024-07-24 20:51:58.191680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.718 [2024-07-24 20:51:58.191968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.191996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.197466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.197755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.197783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.203877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.204337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.204366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.212049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.212451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.212479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.219737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.220128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.220154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.227953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.228351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.228380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.235051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.235348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.235382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.243143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.243498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.243527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.251258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.251681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.251708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.258754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.259026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.259054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.265606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.265888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.265923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.273260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.273581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.273609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.719 [2024-07-24 20:51:58.279853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.719 [2024-07-24 20:51:58.280163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.719 [2024-07-24 20:51:58.280208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.286063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.286352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.286381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.292678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.292962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.292989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.299497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.299773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.299800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.306007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.306291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.306320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.311863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.312189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.312217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.317866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.318141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.318169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.324211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.324503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.324531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.330304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.330581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.330609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.336381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.336658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.336686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.342458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.342735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.342762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.977 [2024-07-24 20:51:58.348576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.977 [2024-07-24 20:51:58.348853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.977 [2024-07-24 20:51:58.348882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.354469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.354745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.354773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.360435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.360709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.360737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.366560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.366832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.366860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.372469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.372743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.372770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.378403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.378679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.378706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.384470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.384761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.384789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.390927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.391237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.396908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.397183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.397211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.402921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.403212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.403249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.409087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.409373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.409403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.415360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.415639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.415667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.421550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.421839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.421868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:02.978 [2024-07-24 20:51:58.427503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:02.978 [2024-07-24 20:51:58.427778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.978 [2024-07-24 20:51:58.427813]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.433755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.434031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.434059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.439875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.440182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.440210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.445741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.446016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.446043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.451832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.452111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.452139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.457870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.458144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.458171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.463927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.464216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.464251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.469879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.470139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.470167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.475883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.476142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.476171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.481823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.482087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.482115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.487928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.488200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.488228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.493454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.493755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.493783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.499075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.499376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.499403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.505794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.978 [2024-07-24 20:51:58.506125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.978 [2024-07-24 20:51:58.506154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:02.978 [2024-07-24 20:51:58.513402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.979 [2024-07-24 20:51:58.513709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.979 [2024-07-24 20:51:58.513737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:02.979 [2024-07-24 20:51:58.520882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.979 [2024-07-24 20:51:58.521142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.979 [2024-07-24 20:51:58.521170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.979 [2024-07-24 20:51:58.528355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 
00:24:02.979 [2024-07-24 20:51:58.528662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.979 [2024-07-24 20:51:58.528690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:02.979 [2024-07-24 20:51:58.536162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:02.979 [2024-07-24 20:51:58.536537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.979 [2024-07-24 20:51:58.536565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.237 [2024-07-24 20:51:58.543843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.237 [2024-07-24 20:51:58.544129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.237 [2024-07-24 20:51:58.544158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.237 [2024-07-24 20:51:58.551498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.237 [2024-07-24 20:51:58.551819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.237 [2024-07-24 20:51:58.551846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.237 [2024-07-24 20:51:58.559367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.237 [2024-07-24 20:51:58.559667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.237 [2024-07-24 20:51:58.559695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.237 [2024-07-24 20:51:58.566976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.237 [2024-07-24 20:51:58.567332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.237 [2024-07-24 20:51:58.567360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.574433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.574799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.574826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.581876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.582152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.582180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 
20:51:58.588942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.589180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.589208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.596581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.596857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.596885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.604457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.604787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.604822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.612298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.612626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.612655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.619942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.620330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.620358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.627799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.628117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.628144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.635822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.636193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.643454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.643821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.643850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.651516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.651898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.651926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.659275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.659643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.659674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.667486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.667898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.667928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.675645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.676008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.676037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.683443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.683836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.683866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.691710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.691983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.692013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.699786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.700103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.707690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.708078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.708108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.715750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.716130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.716158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.723997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.724352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.724380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.731653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.732009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.732038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.739821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.740228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.747654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.747990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.748020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.755442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.755817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.755846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.763741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.764130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.764159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.771818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.772197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.238 [2024-07-24 20:51:58.772226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.238 [2024-07-24 20:51:58.779765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.238 [2024-07-24 20:51:58.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.239 [2024-07-24 20:51:58.780184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.239 [2024-07-24 20:51:58.786843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.239 [2024-07-24 20:51:58.787130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.239 [2024-07-24 20:51:58.787159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.239 [2024-07-24 20:51:58.792841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.239 [2024-07-24 20:51:58.793144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.239 [2024-07-24 20:51:58.793173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.239 [2024-07-24 20:51:58.799584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 
00:24:03.239 [2024-07-24 20:51:58.799873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.239 [2024-07-24 20:51:58.799902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.805867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.806146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.806175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.812785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.813175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.813204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.820438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.820824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.820853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.828641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.828985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.829014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.836136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.836402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.836431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.842073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.842419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.842446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.848029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.848297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.848324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 
20:51:58.853807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.854114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.854142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.860434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.860766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.860794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.867257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.867576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.867604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.874541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.874937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.874965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.881705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.882011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.882040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.889451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.889821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.889849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.897032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.897348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.496 [2024-07-24 20:51:58.897376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.496 [2024-07-24 20:51:58.904536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.496 [2024-07-24 20:51:58.904888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.904916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.911901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.912308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.912337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.919812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.920107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.920137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.927614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.927958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.927992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.935320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.935663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.935691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.943168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.943554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.943582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.950957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.951365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.951393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.958787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.959086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.959115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.966511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.966891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.966918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.973935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.974283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.974312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.980853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.981205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.988282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.988658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.988686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:58.995771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:58.996044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:58.996071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.003270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.003573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.003601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.010833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.011218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.011254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.018584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.018902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.018930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.025688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.026034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.026063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.033535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.033915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.033943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.041269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.041614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.041644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.049310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.497 [2024-07-24 20:51:59.049671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.049699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.497 [2024-07-24 20:51:59.056958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 
00:24:03.497 [2024-07-24 20:51:59.057329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.497 [2024-07-24 20:51:59.057357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.064957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.065367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.065396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.073026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.073406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.073434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.080831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.081098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.081127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.087408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.087693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.087723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.093689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.093976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.094005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.099472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.099758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.099786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.105125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.105441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 
20:51:59.110472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.110735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.110762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.116231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.116508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.116542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.122263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.122528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.122556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.128501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.128763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.128790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.134787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.135075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.135104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.140834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.141114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.141142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.146632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.146893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.146920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.152731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.152990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.153034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.158718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.158974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.754 [2024-07-24 20:51:59.159002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.754 [2024-07-24 20:51:59.164788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.754 [2024-07-24 20:51:59.165068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.165097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.170667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.170925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.170955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.176670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.176948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 
20:51:59.176976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.182738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.182998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.183026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.188572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.188832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.188861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.194661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.194929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.194958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.200548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.200808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.200836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.206733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.206992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.207020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.212850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.213127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.213155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.218604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.218863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.218897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.224464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.224726] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.224753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.230594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.230869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.230898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.236819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.237080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.237109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.242864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 20:51:59.243124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.755 [2024-07-24 20:51:59.243152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.755 [2024-07-24 20:51:59.248843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90 00:24:03.755 [2024-07-24 
20:51:59.249103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:03.755 [2024-07-24 20:51:59.249131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:03.755 [2024-07-24 20:51:59.254613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
[... repeated triplets of data_crc32_calc_done *ERROR*, WRITE *NOTICE*, and COMMAND TRANSIENT TRANSPORT ERROR records for further LBAs, 20:51:59.254 through 20:51:59.628, omitted ...]
00:24:04.270 [2024-07-24 20:51:59.633958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1716af0) with pdu=0x2000190fef90
00:24:04.270 [2024-07-24 20:51:59.634160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.270 [2024-07-24 20:51:59.634188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:04.270
00:24:04.270 Latency(us)
00:24:04.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.270 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:04.270 nvme0n1 : 2.00 4493.39 561.67 0.00 0.00 3552.64 2524.35 9126.49
00:24:04.271 ===================================================================================================================
00:24:04.271 Total : 4493.39 561.67 0.00 0.00 3552.64 2524.35 9126.49
00:24:04.271 0
00:24:04.271 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:04.271 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
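The repeated data_crc32_calc_done failures above come from the digest-error test: NVMe/TCP appends a CRC-32C data digest to each data PDU, and the test corrupts traffic so the recomputed digest no longer matches, producing the transient transport errors counted below. As a rough, hypothetical illustration only (not SPDK's actual implementation, which uses optimized/accelerated CRC routines), a bit-by-bit CRC-32C and a digest-mismatch check can be sketched like this:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected form, reversed polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# A receiver compares the digest carried in the PDU against one recomputed
# over the payload; a mismatch is the "Data digest error" seen in the log.
payload = b"123456789"
good_digest = crc32c(payload)  # 0xE3069283, the well-known CRC-32C check value
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # flip one bit
assert crc32c(corrupted) != good_digest
```

The names here (`crc32c`, `payload`) are illustrative, not SPDK symbols; the point is only that any single-bit corruption changes the digest and is detected.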
00:24:04.271 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:04.271 | .driver_specific
00:24:04.271 | .nvme_error
00:24:04.271 | .status_code
00:24:04.271 | .command_transient_transport_error'
00:24:04.271 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 290 > 0 ))
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1688739
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1688739 ']'
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1688739
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688739
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688739'
killing process with pid 1688739
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1688739
Received shutdown signal, test time was about 2.000000 seconds
00:24:04.527
00:24:04.527
00:24:04.527 Latency(us)
00:24:04.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.527 ===================================================================================================================
00:24:04.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:04.527 20:51:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1688739
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1687258
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1687258 ']'
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1687258
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687258
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687258'
killing process with pid 1687258
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1687258
00:24:04.784 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1687258
00:24:05.131
00:24:05.131 real 0m15.583s
00:24:05.131 user 0m30.999s
00:24:05.131 sys 0m4.202s
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:05.131 ************************************
00:24:05.131 END TEST nvmf_digest_error
00:24:05.131 ************************************
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:05.131 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1687258 ']'
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1687258
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1687258 ']'
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1687258
00:24:05.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1687258) - No such process
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1687258 is not found'
Process with pid 1687258 is not found
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:05.131 20:52:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:07.660
00:24:07.660 real 0m37.763s
00:24:07.660 user 1m6.933s
00:24:07.660 sys 0m10.066s
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:24:07.660 ************************************
00:24:07.660 END TEST nvmf_digest
00:24:07.660 ************************************
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.660 ************************************
00:24:07.660 START TEST nvmf_bdevperf
00:24:07.660 ************************************
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:07.660 * Looking for test storage...
00:24:07.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.660 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- 
# '[' -z tcp ']' 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.661 20:52:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.564 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.565 20:52:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # 
[[ ice == unbound ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.565 20:52:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.565 20:52:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:09.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:24:09.565 00:24:09.565 --- 10.0.0.2 ping statistics --- 00:24:09.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.565 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:09.565 00:24:09.565 --- 10.0.0.1 ping statistics --- 00:24:09.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.565 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.565 
20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1691087 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1691087 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1691087 ']' 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.565 20:52:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.565 [2024-07-24 20:52:05.030654] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:24:09.565 [2024-07-24 20:52:05.030740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.565 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.565 [2024-07-24 20:52:05.108145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.823 [2024-07-24 20:52:05.230529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.823 [2024-07-24 20:52:05.230596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.823 [2024-07-24 20:52:05.230612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.823 [2024-07-24 20:52:05.230625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.823 [2024-07-24 20:52:05.230637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.823 [2024-07-24 20:52:05.230717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.823 [2024-07-24 20:52:05.230775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.823 [2024-07-24 20:52:05.230779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.755 [2024-07-24 20:52:06.053603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.755 Malloc0 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.755 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.756 [2024-07-24 20:52:06.117923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:10.756 
20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.756 { 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme$subsystem", 00:24:10.756 "trtype": "$TEST_TRANSPORT", 00:24:10.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "$NVMF_PORT", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.756 "hdgst": ${hdgst:-false}, 00:24:10.756 "ddgst": ${ddgst:-false} 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 } 00:24:10.756 EOF 00:24:10.756 )") 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:10.756 20:52:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme1", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 }' 00:24:10.756 [2024-07-24 20:52:06.167914] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:24:10.756 [2024-07-24 20:52:06.167993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691240 ] 00:24:10.756 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.756 [2024-07-24 20:52:06.228424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.013 [2024-07-24 20:52:06.338697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.273 Running I/O for 1 seconds... 00:24:12.208 00:24:12.208 Latency(us) 00:24:12.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.208 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:12.208 Verification LBA range: start 0x0 length 0x4000 00:24:12.208 Nvme1n1 : 1.02 8747.38 34.17 0.00 0.00 14570.19 2730.67 15437.37 00:24:12.208 =================================================================================================================== 00:24:12.208 Total : 8747.38 34.17 0.00 0.00 14570.19 2730.67 15437.37 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1691504 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.467 20:52:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.467 { 00:24:12.467 "params": { 00:24:12.467 "name": "Nvme$subsystem", 00:24:12.467 "trtype": "$TEST_TRANSPORT", 00:24:12.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.467 "adrfam": "ipv4", 00:24:12.467 "trsvcid": "$NVMF_PORT", 00:24:12.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.467 "hdgst": ${hdgst:-false}, 00:24:12.467 "ddgst": ${ddgst:-false} 00:24:12.467 }, 00:24:12.467 "method": "bdev_nvme_attach_controller" 00:24:12.467 } 00:24:12.467 EOF 00:24:12.467 )") 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:12.467 20:52:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:12.467 "params": { 00:24:12.467 "name": "Nvme1", 00:24:12.467 "trtype": "tcp", 00:24:12.467 "traddr": "10.0.0.2", 00:24:12.467 "adrfam": "ipv4", 00:24:12.467 "trsvcid": "4420", 00:24:12.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.467 "hdgst": false, 00:24:12.467 "ddgst": false 00:24:12.467 }, 00:24:12.467 "method": "bdev_nvme_attach_controller" 00:24:12.467 }' 00:24:12.467 [2024-07-24 20:52:07.998014] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:24:12.467 [2024-07-24 20:52:07.998094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691504 ] 00:24:12.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.725 [2024-07-24 20:52:08.057935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.725 [2024-07-24 20:52:08.166579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.035 Running I/O for 15 seconds... 00:24:15.585 20:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1691087 00:24:15.585 20:52:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:15.585 [2024-07-24 20:52:10.969149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44608 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:15.585 [2024-07-24 20:52:10.969941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.969973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.969990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.585 [2024-07-24 20:52:10.970448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.585 [2024-07-24 20:52:10.970461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 
[2024-07-24 20:52:10.970491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970685] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.970978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.970994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.586 [2024-07-24 20:52:10.971362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.586 [2024-07-24 20:52:10.971396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.586 [2024-07-24 20:52:10.971425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 
[2024-07-24 20:52:10.971441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.586 [2024-07-24 20:52:10.971454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.586 [2024-07-24 20:52:10.971485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.586 [2024-07-24 20:52:10.971766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.586 [2024-07-24 20:52:10.971781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45160 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.971960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.971975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 
20:52:10.971991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972166] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.587 [2024-07-24 20:52:10.972765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.972978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.972994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.973010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.973029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.973047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.587 [2024-07-24 20:52:10.973062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.587 [2024-07-24 20:52:10.973079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:15.588 [2024-07-24 20:52:10.973420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8830 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:10.973454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:15.588 [2024-07-24 20:52:10.973465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:15.588 [2024-07-24 20:52:10.973476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:44576 len:8 PRP1 0x0 PRP2 0x0 00:24:15.588 [2024-07-24 20:52:10.973497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973584] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d8830 was disconnected and freed. reset controller. 00:24:15.588 [2024-07-24 20:52:10.973664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.588 [2024-07-24 20:52:10.973687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.588 [2024-07-24 20:52:10.973718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.588 [2024-07-24 20:52:10.973748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.588 [2024-07-24 20:52:10.973776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.588 [2024-07-24 20:52:10.973790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:10.977595] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:10.977635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:10.978317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:10.978347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:10.978363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:10.978600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:10.978841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.588 [2024-07-24 20:52:10.978863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.588 [2024-07-24 20:52:10.978880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.588 [2024-07-24 20:52:10.982489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.588 [2024-07-24 20:52:10.991805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:10.992229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:10.992268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:10.992286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:10.992538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:10.992786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.588 [2024-07-24 20:52:10.992809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.588 [2024-07-24 20:52:10.992824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.588 [2024-07-24 20:52:10.996395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.588 [2024-07-24 20:52:11.005657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:11.006080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:11.006111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:11.006128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:11.006377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:11.006619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.588 [2024-07-24 20:52:11.006642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.588 [2024-07-24 20:52:11.006657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.588 [2024-07-24 20:52:11.010221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.588 [2024-07-24 20:52:11.019692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:11.020084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:11.020115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:11.020132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:11.020381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:11.020623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.588 [2024-07-24 20:52:11.020646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.588 [2024-07-24 20:52:11.020660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.588 [2024-07-24 20:52:11.024217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.588 [2024-07-24 20:52:11.033691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:11.034122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:11.034152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:11.034169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:11.034417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.588 [2024-07-24 20:52:11.034658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.588 [2024-07-24 20:52:11.034680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.588 [2024-07-24 20:52:11.034701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.588 [2024-07-24 20:52:11.038297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.588 [2024-07-24 20:52:11.047570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.588 [2024-07-24 20:52:11.047996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.588 [2024-07-24 20:52:11.048026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.588 [2024-07-24 20:52:11.048043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.588 [2024-07-24 20:52:11.048291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.048533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.048555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.048571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.052128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.061399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.061799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.061830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.061847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.062084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.062345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.062370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.062385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.065943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.075421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.075835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.075865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.075882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.076119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.076372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.076395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.076410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.079967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.089238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.089670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.089706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.089724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.089961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.090202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.090224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.090239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.093818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.103075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.103472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.103503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.103520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.103757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.103997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.104019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.104034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.107610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.117086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.117459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.117490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.117507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.117744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.117986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.118008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.118023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.121593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.131081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.131475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.131506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.131523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.131760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.132007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.132030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.132045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.589 [2024-07-24 20:52:11.135625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.589 [2024-07-24 20:52:11.145165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.589 [2024-07-24 20:52:11.145594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.589 [2024-07-24 20:52:11.145628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.589 [2024-07-24 20:52:11.145646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.589 [2024-07-24 20:52:11.145884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.589 [2024-07-24 20:52:11.146126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.589 [2024-07-24 20:52:11.146149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.589 [2024-07-24 20:52:11.146164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.150021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.159197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.159600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.159642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.159660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.159897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.160139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.160162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.160176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.163762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.173079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.173476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.173507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.173524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.173762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.174003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.174026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.174041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.177627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.186918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.187309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.187340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.187358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.187595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.187836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.187859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.187874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.191454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.200949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.201363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.201395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.201413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.201650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.201892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.201915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.201930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.205500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.214977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.215400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.215430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.215447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.215684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.215924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.215947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.215962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.219528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.229002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.229417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.229449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.229472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.229709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.229950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.229972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.229987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.233554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.243029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.243428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.243460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.243477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.243715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.243956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.243979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.243994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.247578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.257057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.257479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.257510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.257527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.257764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.258006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.258029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.258044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.261613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.271079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.271527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.271594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.271611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.849 [2024-07-24 20:52:11.271848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.849 [2024-07-24 20:52:11.272089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.849 [2024-07-24 20:52:11.272117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.849 [2024-07-24 20:52:11.272133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.849 [2024-07-24 20:52:11.275697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.849 [2024-07-24 20:52:11.284976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.849 [2024-07-24 20:52:11.285407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.849 [2024-07-24 20:52:11.285437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.849 [2024-07-24 20:52:11.285455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.285692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.285933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.285955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.285970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.289541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.298811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.299195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.299225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.299251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.299491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.299732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.299754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.299769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.303334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.312823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.313236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.313274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.313291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.313527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.313768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.313791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.313805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.317376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.326654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.327042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.327073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.327090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.327340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.327582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.327604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.327620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.331178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.340661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.341067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.341097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.341115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.341365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.341607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.341630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.341645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.345206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.354487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.354905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.354936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.354953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.355189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.355441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.355465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.355479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.359175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.368449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.368858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.368889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.368906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.369148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.369402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.369426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.369441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.372999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.382481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.382906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.382936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.382953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.383190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.383442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.383465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.383480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.387041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.396309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.396726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.396756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.396774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.397011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.397263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.397287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.397301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.850 [2024-07-24 20:52:11.400858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.850 [2024-07-24 20:52:11.410347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.850 [2024-07-24 20:52:11.410786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.850 [2024-07-24 20:52:11.410819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:15.850 [2024-07-24 20:52:11.410836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:15.850 [2024-07-24 20:52:11.411094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:15.850 [2024-07-24 20:52:11.411351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.850 [2024-07-24 20:52:11.411374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.850 [2024-07-24 20:52:11.411395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.109 [2024-07-24 20:52:11.415137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.109 [2024-07-24 20:52:11.424324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.109 [2024-07-24 20:52:11.424749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.109 [2024-07-24 20:52:11.424782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.109 [2024-07-24 20:52:11.424799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.109 [2024-07-24 20:52:11.425036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.109 [2024-07-24 20:52:11.425291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.109 [2024-07-24 20:52:11.425314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.109 [2024-07-24 20:52:11.425329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.109 [2024-07-24 20:52:11.428889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.109 [2024-07-24 20:52:11.438367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.109 [2024-07-24 20:52:11.438790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.109 [2024-07-24 20:52:11.438821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.109 [2024-07-24 20:52:11.438839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.109 [2024-07-24 20:52:11.439077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.109 [2024-07-24 20:52:11.439330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.109 [2024-07-24 20:52:11.439353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.109 [2024-07-24 20:52:11.439369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.109 [2024-07-24 20:52:11.442932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.109 [2024-07-24 20:52:11.452207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.109 [2024-07-24 20:52:11.452635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.109 [2024-07-24 20:52:11.452665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.452683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.452920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.453162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.453184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.453199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.456771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.466028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.466431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.466462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.466479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.466716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.466957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.466979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.466994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.470576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.480063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.480455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.480503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.480739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.480981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.481003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.481018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.484627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.493897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.494296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.494328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.494346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.494584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.494826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.494848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.494863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.498433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.507913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.508308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.508339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.508356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.508598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.508840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.508863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.508877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.512452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.521923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.522332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.522363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.522381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.522617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.522858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.522880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.522895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.526465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.535946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.536350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.536381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.536399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.536637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.536878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.536900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.536915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.540482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.549962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.550376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.550407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.550424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.550661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.550902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.550924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.550945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.554518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.563986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.564355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.564386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.564403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.564640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.564881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.564904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.564919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.568489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.577958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.578349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.578381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.578398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.578636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.110 [2024-07-24 20:52:11.578877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.110 [2024-07-24 20:52:11.578899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.110 [2024-07-24 20:52:11.578914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.110 [2024-07-24 20:52:11.582487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.110 [2024-07-24 20:52:11.591965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.110 [2024-07-24 20:52:11.592360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.110 [2024-07-24 20:52:11.592392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.110 [2024-07-24 20:52:11.592409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.110 [2024-07-24 20:52:11.592647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.592888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.592911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.592926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.596496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.111 [2024-07-24 20:52:11.605967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.111 [2024-07-24 20:52:11.606360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.111 [2024-07-24 20:52:11.606398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.111 [2024-07-24 20:52:11.606416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.111 [2024-07-24 20:52:11.606654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.606895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.606918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.606932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.610506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.111 [2024-07-24 20:52:11.619991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.111 [2024-07-24 20:52:11.620415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.111 [2024-07-24 20:52:11.620446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.111 [2024-07-24 20:52:11.620463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.111 [2024-07-24 20:52:11.620700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.620941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.620963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.620978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.624544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.111 [2024-07-24 20:52:11.634018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.111 [2024-07-24 20:52:11.634444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.111 [2024-07-24 20:52:11.634474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.111 [2024-07-24 20:52:11.634491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.111 [2024-07-24 20:52:11.634728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.634969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.634991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.635006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.638581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.111 [2024-07-24 20:52:11.647848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.111 [2024-07-24 20:52:11.648257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.111 [2024-07-24 20:52:11.648288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.111 [2024-07-24 20:52:11.648306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.111 [2024-07-24 20:52:11.648553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.648801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.648824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.648839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.652405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.111 [2024-07-24 20:52:11.661873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.111 [2024-07-24 20:52:11.662291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.111 [2024-07-24 20:52:11.662322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.111 [2024-07-24 20:52:11.662339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.111 [2024-07-24 20:52:11.662576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.111 [2024-07-24 20:52:11.662818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.111 [2024-07-24 20:52:11.662840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.111 [2024-07-24 20:52:11.662855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.111 [2024-07-24 20:52:11.666425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.370 [2024-07-24 20:52:11.675941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.370 [2024-07-24 20:52:11.676366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.370 [2024-07-24 20:52:11.676399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.370 [2024-07-24 20:52:11.676417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.370 [2024-07-24 20:52:11.676655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.370 [2024-07-24 20:52:11.676896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.370 [2024-07-24 20:52:11.676919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.370 [2024-07-24 20:52:11.676934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.370 [2024-07-24 20:52:11.680577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.370 [2024-07-24 20:52:11.689890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.370 [2024-07-24 20:52:11.690254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.370 [2024-07-24 20:52:11.690286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.370 [2024-07-24 20:52:11.690304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.370 [2024-07-24 20:52:11.690541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.370 [2024-07-24 20:52:11.690783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.690805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.690821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.694394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.703860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.704252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.704283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.704300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.704537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.704778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.704801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.704816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.708387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.717864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.718324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.718355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.718373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.718610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.718851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.718874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.718888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.722460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.731727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.732116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.732146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.732163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.732409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.732651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.732674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.732689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.736259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.745576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.745963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.745994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.746017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.746267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.746521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.746544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.746559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.750143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.759419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.759828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.759858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.759875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.760112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.760366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.760389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.760404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.763963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.773442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.773829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.773859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.773876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.774112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.774364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.774387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.774402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.777963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.787461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.787845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.787876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.787893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.788129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.788381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.788410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.788426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.791986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.801488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.801873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.801903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.801920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.802156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.802409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.802433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.802447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.806035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.815525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.815936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.815966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.815983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.816220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.816468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.816491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.371 [2024-07-24 20:52:11.816506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.371 [2024-07-24 20:52:11.820068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.371 [2024-07-24 20:52:11.829554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.371 [2024-07-24 20:52:11.829935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.371 [2024-07-24 20:52:11.829965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:16.371 [2024-07-24 20:52:11.829982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:16.371 [2024-07-24 20:52:11.830218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:16.371 [2024-07-24 20:52:11.830467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.371 [2024-07-24 20:52:11.830490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.372 [2024-07-24 20:52:11.830506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.372 [2024-07-24 20:52:11.834072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.372 [2024-07-24 20:52:11.843568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.843964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.843995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.844012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.844260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.844501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.844523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.844538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.848101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.857594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.858003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.858033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.858050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.858299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.858541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.858563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.858578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.862138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.871618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.872008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.872038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.872055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.872304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.872546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.872568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.872583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.876142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.885634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.886030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.886061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.886078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.886333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.886576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.886598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.886613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.890174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.899650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.900057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.900087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.900104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.900353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.900594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.900617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.900631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.904190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.913666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.914050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.914081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.914098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.914347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.914590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.914612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.914627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.918187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.372 [2024-07-24 20:52:11.927661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.372 [2024-07-24 20:52:11.928071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.372 [2024-07-24 20:52:11.928101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.372 [2024-07-24 20:52:11.928118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.372 [2024-07-24 20:52:11.928366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.372 [2024-07-24 20:52:11.928608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.372 [2024-07-24 20:52:11.928631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.372 [2024-07-24 20:52:11.928651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.372 [2024-07-24 20:52:11.932270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.631 [2024-07-24 20:52:11.941639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.631 [2024-07-24 20:52:11.942029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.631 [2024-07-24 20:52:11.942061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.631 [2024-07-24 20:52:11.942079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.631 [2024-07-24 20:52:11.942328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:11.942570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:11.942593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:11.942609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:11.946176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:11.955675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:11.956032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:11.956063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:11.956080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:11.956330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:11.956572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:11.956594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:11.956609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:11.960168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:11.969651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:11.970049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:11.970080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:11.970097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:11.970343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:11.970585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:11.970607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:11.970622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:11.974188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:11.983692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:11.984086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:11.984116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:11.984133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:11.984381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:11.984622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:11.984645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:11.984660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:11.988225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:11.997535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:11.997962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:11.997992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:11.998009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:11.998255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:11.998506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:11.998529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:11.998544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.002105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.011386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.011780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.011810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.011828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.012065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:12.012318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:12.012342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:12.012357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.015920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.025405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.025815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.025846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.025863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.026100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:12.026357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:12.026381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:12.026397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.029959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.039226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.039624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.039655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.039672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.039909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:12.040151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:12.040174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:12.040189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.043757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.053258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.053682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.053712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.053729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.053966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:12.054206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:12.054229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:12.054254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.057820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.067126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.067504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.067535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.067552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.067789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.632 [2024-07-24 20:52:12.068030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.632 [2024-07-24 20:52:12.068070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.632 [2024-07-24 20:52:12.068086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.632 [2024-07-24 20:52:12.071673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.632 [2024-07-24 20:52:12.081157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.632 [2024-07-24 20:52:12.081576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.632 [2024-07-24 20:52:12.081607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.632 [2024-07-24 20:52:12.081625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.632 [2024-07-24 20:52:12.081862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.082103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.082125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.082140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.085717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.095186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.095605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.095636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.095653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.095889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.096131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.096153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.096168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.099738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.109215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.109610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.109641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.109658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.109895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.110136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.110159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.110173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.113745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.123216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.123612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.123650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.123668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.123905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.124147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.124170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.124185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.127770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.137255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.137671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.137701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.137719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.137955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.138196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.138218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.138232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.141804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.151290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.151724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.151754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.151772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.152009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.152261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.152284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.152298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.155858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.165335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.165751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.165781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.165798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.166035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.166294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.166318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.166332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.169891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.179367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.179778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.179808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.179825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.180062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.180315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.180338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.180353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.633 [2024-07-24 20:52:12.183920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.633 [2024-07-24 20:52:12.193183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.633 [2024-07-24 20:52:12.193634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.633 [2024-07-24 20:52:12.193666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.633 [2024-07-24 20:52:12.193684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.633 [2024-07-24 20:52:12.193939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.633 [2024-07-24 20:52:12.194186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.633 [2024-07-24 20:52:12.194209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.633 [2024-07-24 20:52:12.194223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.892 [2024-07-24 20:52:12.197971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.892 [2024-07-24 20:52:12.207138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.892 [2024-07-24 20:52:12.207543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.892 [2024-07-24 20:52:12.207576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.892 [2024-07-24 20:52:12.207593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.892 [2024-07-24 20:52:12.207831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.892 [2024-07-24 20:52:12.208072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.892 [2024-07-24 20:52:12.208095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.892 [2024-07-24 20:52:12.208110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.892 [2024-07-24 20:52:12.211685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.892 [2024-07-24 20:52:12.221162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.892 [2024-07-24 20:52:12.221556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.892 [2024-07-24 20:52:12.221588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.892 [2024-07-24 20:52:12.221605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.892 [2024-07-24 20:52:12.221842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.892 [2024-07-24 20:52:12.222084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.222107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.222122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.225690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.235170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.235596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.235645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.235882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.236123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.236145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.236160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.239731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.249001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.249417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.249447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.249464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.249701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.249942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.249965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.249980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.253563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.262858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.263253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.263283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.263306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.263543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.263785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.263807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.263822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.267395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.276884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.277300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.277331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.277349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.277586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.277828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.277850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.277865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.281440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.290727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.291143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.291175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.291191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.291438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.291680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.291703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.291717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.295298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.304591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.304961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.304991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.305009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.305257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.305500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.305532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.305548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.309113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.318618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.319002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.319032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.319049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.319298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.319541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.319563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.319579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.323142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.332642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.333036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.333066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.333083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.333330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.333572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.333595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.333610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.337173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.346665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.893 [2024-07-24 20:52:12.347050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.893 [2024-07-24 20:52:12.347080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.893 [2024-07-24 20:52:12.347097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.893 [2024-07-24 20:52:12.347345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.893 [2024-07-24 20:52:12.347588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.893 [2024-07-24 20:52:12.347611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.893 [2024-07-24 20:52:12.347625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.893 [2024-07-24 20:52:12.351199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.893 [2024-07-24 20:52:12.360696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.361087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.361117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.361134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.361382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.361624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.361647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.361662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.365225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.374727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.375147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.375177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.375194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.375441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.375683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.375705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.375720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.379293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.388672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.389036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.389066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.389083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.389332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.389574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.389597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.389612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.393176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.402673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.403058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.403089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.403106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.403361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.403604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.403627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.403642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.407205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.416700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.417114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.417145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.417162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.417410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.417652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.417674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.417689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.421259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.430529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.430949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.430979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.430996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.431234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.431485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.431508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.431522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.435084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.894 [2024-07-24 20:52:12.444564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.894 [2024-07-24 20:52:12.444969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.894 [2024-07-24 20:52:12.444999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:16.894 [2024-07-24 20:52:12.445016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:16.894 [2024-07-24 20:52:12.445263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:16.894 [2024-07-24 20:52:12.445509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:16.894 [2024-07-24 20:52:12.445531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:16.894 [2024-07-24 20:52:12.445552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:16.894 [2024-07-24 20:52:12.449113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.154 [2024-07-24 20:52:12.458621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.154 [2024-07-24 20:52:12.459051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.154 [2024-07-24 20:52:12.459084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.154 [2024-07-24 20:52:12.459103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.154 [2024-07-24 20:52:12.459357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.154 [2024-07-24 20:52:12.459601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.154 [2024-07-24 20:52:12.459623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.154 [2024-07-24 20:52:12.459638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.154 [2024-07-24 20:52:12.463278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.154 [2024-07-24 20:52:12.472621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.154 [2024-07-24 20:52:12.473038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.154 [2024-07-24 20:52:12.473070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.154 [2024-07-24 20:52:12.473087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.154 [2024-07-24 20:52:12.473337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.154 [2024-07-24 20:52:12.473580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.154 [2024-07-24 20:52:12.473602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.154 [2024-07-24 20:52:12.473617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.154 [2024-07-24 20:52:12.477177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.154 [2024-07-24 20:52:12.486466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.486891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.486921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.486939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.487176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.487427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.487451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.487466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.491024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.500298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.500686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.500723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.500741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.500978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.501219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.501251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.501269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.504830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.514314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.514724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.514754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.514771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.515009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.515260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.515283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.515298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.518858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.528337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.528744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.528774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.528792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.529029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.529281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.529304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.529319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.532880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.542360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.542752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.542783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.542800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.543037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.543296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.543319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.543334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.546890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.556382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.556799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.556829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.556846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.557083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.557335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.557359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.557374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.560932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.570408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.570840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.570870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.570888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.571124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.571377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.154 [2024-07-24 20:52:12.571400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.154 [2024-07-24 20:52:12.571415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.154 [2024-07-24 20:52:12.574973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.154 [2024-07-24 20:52:12.584461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.154 [2024-07-24 20:52:12.584872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.154 [2024-07-24 20:52:12.584903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.154 [2024-07-24 20:52:12.584920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.154 [2024-07-24 20:52:12.585157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.154 [2024-07-24 20:52:12.585409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.585432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.585447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.589011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.598489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.598876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.598906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.598923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.599160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.599412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.599436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.599450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.603012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.612491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.612910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.612940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.612958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.613194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.613445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.613469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.613484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.617046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.626315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.626695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.626725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.626742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.626979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.627219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.627251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.627268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.630828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.640325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.640740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.640770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.640793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.641031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.641283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.641306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.641321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.644879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.654369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.654751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.654781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.654798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.655035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.655288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.655312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.655326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.658890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.668368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.668783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.668814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.668831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.669068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.669321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.669344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.669359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.672916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.682385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.682793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.682822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.682839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.683075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.683336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.683366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.683382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.686942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.696200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.696598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.696629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.696647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.696884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.697125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.697147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.697162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.700733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.155 [2024-07-24 20:52:12.710208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.155 [2024-07-24 20:52:12.710621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.155 [2024-07-24 20:52:12.710652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.155 [2024-07-24 20:52:12.710669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.155 [2024-07-24 20:52:12.710906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.155 [2024-07-24 20:52:12.711147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.155 [2024-07-24 20:52:12.711170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.155 [2024-07-24 20:52:12.711184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.155 [2024-07-24 20:52:12.714752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.414 [2024-07-24 20:52:12.724095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.724515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.724557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.724586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.724829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.725091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.725117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.725132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.728708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.737980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.738375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.738406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.738424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.738661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.738903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.738925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.738940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.742511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.751994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.752382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.752413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.752430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.752667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.752908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.752930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.752946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.756518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.765992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.766421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.766451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.766469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.766706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.766947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.766969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.766984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.770555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.779816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.780215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.780254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.780279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.780518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.780759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.780782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.780796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.784378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.793846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.794333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.794365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.794382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.794619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.794861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.794883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.794898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.798465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.807730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.415 [2024-07-24 20:52:12.808190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.415 [2024-07-24 20:52:12.808221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:17.415 [2024-07-24 20:52:12.808238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:17.415 [2024-07-24 20:52:12.808486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:17.415 [2024-07-24 20:52:12.808731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.415 [2024-07-24 20:52:12.808753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.415 [2024-07-24 20:52:12.808769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.415 [2024-07-24 20:52:12.812351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.415 [2024-07-24 20:52:12.821623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.415 [2024-07-24 20:52:12.822046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.415 [2024-07-24 20:52:12.822077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.415 [2024-07-24 20:52:12.822094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.415 [2024-07-24 20:52:12.822342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.415 [2024-07-24 20:52:12.822584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.415 [2024-07-24 20:52:12.822612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.415 [2024-07-24 20:52:12.822628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.415 [2024-07-24 20:52:12.826189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.415 [2024-07-24 20:52:12.835789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.415 [2024-07-24 20:52:12.836226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.415 [2024-07-24 20:52:12.836272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.415 [2024-07-24 20:52:12.836292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.415 [2024-07-24 20:52:12.836539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.415 [2024-07-24 20:52:12.836793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.415 [2024-07-24 20:52:12.836817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.415 [2024-07-24 20:52:12.836832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.415 [2024-07-24 20:52:12.840413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.415 [2024-07-24 20:52:12.849681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.415 [2024-07-24 20:52:12.850193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.415 [2024-07-24 20:52:12.850252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.415 [2024-07-24 20:52:12.850271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.415 [2024-07-24 20:52:12.850509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.415 [2024-07-24 20:52:12.850750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.415 [2024-07-24 20:52:12.850773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.415 [2024-07-24 20:52:12.850787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.854372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.863641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.864034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.864066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.864083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.864333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.864576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.864598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.864613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.868174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.877653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.878083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.878114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.878131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.878379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.878621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.878643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.878658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.882222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.891498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.891975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.892026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.892043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.892322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.892564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.892587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.892601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.896160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.905428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.905839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.905869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.905886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.906123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.906376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.906400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.906415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.909976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.919453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.919871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.919901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.919918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.920161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.920413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.920437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.920452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.924011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.933484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.933894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.933923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.933940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.934177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.934428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.934451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.934467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.938026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.947496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.947902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.947932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.947949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.948186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.948438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.948461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.948476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.952036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.961342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.961763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.961793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.961810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.962048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.962300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.962324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.962345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.965905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.416 [2024-07-24 20:52:12.975165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.416 [2024-07-24 20:52:12.975593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.416 [2024-07-24 20:52:12.975624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.416 [2024-07-24 20:52:12.975641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.416 [2024-07-24 20:52:12.975877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.416 [2024-07-24 20:52:12.976118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.416 [2024-07-24 20:52:12.976140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.416 [2024-07-24 20:52:12.976155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.416 [2024-07-24 20:52:12.979873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.675 [2024-07-24 20:52:12.989158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.675 [2024-07-24 20:52:12.989570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.675 [2024-07-24 20:52:12.989602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.675 [2024-07-24 20:52:12.989620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.675 [2024-07-24 20:52:12.989857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.675 [2024-07-24 20:52:12.990099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.675 [2024-07-24 20:52:12.990122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.675 [2024-07-24 20:52:12.990137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.675 [2024-07-24 20:52:12.993711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.675 [2024-07-24 20:52:13.003185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.003619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.003650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.003667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.003905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.004146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.004168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.004183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.007989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.017044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.017469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.017509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.017527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.017765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.018006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.018029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.018043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.021614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.030875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.031306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.031336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.031353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.031590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.031832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.031855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.031869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.035439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.044700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.045097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.045128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.045145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.045394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.045637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.045659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.045674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.049239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.058523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.058939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.058969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.058986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.059223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.059481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.059505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.059520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.063080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.072349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.072732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.072762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.072779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.073016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.073268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.073292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.073306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.076863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.086368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.086751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.086782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.086800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.087037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.087290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.087314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.087329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.090890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.100373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.100853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.100904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.100921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.101158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.101410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.101434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.101449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.105013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.114288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.114693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.114723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.114740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.114977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.115218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.115251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.115269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.118832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.128317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.128772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.676 [2024-07-24 20:52:13.128802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.676 [2024-07-24 20:52:13.128819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.676 [2024-07-24 20:52:13.129057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.676 [2024-07-24 20:52:13.129308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.676 [2024-07-24 20:52:13.129332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.676 [2024-07-24 20:52:13.129346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.676 [2024-07-24 20:52:13.132911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.676 [2024-07-24 20:52:13.142182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.676 [2024-07-24 20:52:13.142598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.142628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.142645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.142882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.143123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.143145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.143159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.146736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.156230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.156767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.156820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.156843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.157081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.157335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.157359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.157374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.160934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.170198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.170576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.170608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.170625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.170863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.171104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.171127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.171142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.174716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.184213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.184699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.184751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.184768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.185006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.185256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.185280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.185295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.188855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.198128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.198551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.198582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.198599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.198836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.199077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.199105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.199121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.202693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.211968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.212385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.212417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.212434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.212672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.212913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.212935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.212950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.216544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.225814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.226199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.226231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.226259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.226498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.226739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.226762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.226777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.677 [2024-07-24 20:52:13.230349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.677 [2024-07-24 20:52:13.239986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.677 [2024-07-24 20:52:13.240422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.677 [2024-07-24 20:52:13.240454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.677 [2024-07-24 20:52:13.240472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.677 [2024-07-24 20:52:13.240709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.677 [2024-07-24 20:52:13.240950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.677 [2024-07-24 20:52:13.240972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.677 [2024-07-24 20:52:13.240987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.945 [2024-07-24 20:52:13.244688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.945 [2024-07-24 20:52:13.253862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.945 [2024-07-24 20:52:13.254287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.945 [2024-07-24 20:52:13.254319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.945 [2024-07-24 20:52:13.254336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.945 [2024-07-24 20:52:13.254574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.945 [2024-07-24 20:52:13.254815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.945 [2024-07-24 20:52:13.254837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.945 [2024-07-24 20:52:13.254853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.945 [2024-07-24 20:52:13.258426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.945 [2024-07-24 20:52:13.267692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.945 [2024-07-24 20:52:13.268102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.945 [2024-07-24 20:52:13.268132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.945 [2024-07-24 20:52:13.268149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.945 [2024-07-24 20:52:13.268397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.945 [2024-07-24 20:52:13.268639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.945 [2024-07-24 20:52:13.268661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.945 [2024-07-24 20:52:13.268676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.945 [2024-07-24 20:52:13.272248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.945 [2024-07-24 20:52:13.281717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.945 [2024-07-24 20:52:13.282132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.945 [2024-07-24 20:52:13.282162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.945 [2024-07-24 20:52:13.282179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.945 [2024-07-24 20:52:13.282426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.945 [2024-07-24 20:52:13.282668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.282691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.282705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.286284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.295543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.946 [2024-07-24 20:52:13.295964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.946 [2024-07-24 20:52:13.295994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.946 [2024-07-24 20:52:13.296012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.946 [2024-07-24 20:52:13.296266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.946 [2024-07-24 20:52:13.296509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.296532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.296546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.300107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.309586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.946 [2024-07-24 20:52:13.309997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.946 [2024-07-24 20:52:13.310028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.946 [2024-07-24 20:52:13.310045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.946 [2024-07-24 20:52:13.310294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.946 [2024-07-24 20:52:13.310536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.310558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.310573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.314131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.323608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.946 [2024-07-24 20:52:13.324077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.946 [2024-07-24 20:52:13.324127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.946 [2024-07-24 20:52:13.324145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.946 [2024-07-24 20:52:13.324393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.946 [2024-07-24 20:52:13.324634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.324656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.324671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.328231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.337495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.946 [2024-07-24 20:52:13.337980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.946 [2024-07-24 20:52:13.338031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.946 [2024-07-24 20:52:13.338048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.946 [2024-07-24 20:52:13.338296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.946 [2024-07-24 20:52:13.338538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.338560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.338580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.342225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.351491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.946 [2024-07-24 20:52:13.351896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.946 [2024-07-24 20:52:13.351927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.946 [2024-07-24 20:52:13.351944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.946 [2024-07-24 20:52:13.352181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.946 [2024-07-24 20:52:13.352432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.946 [2024-07-24 20:52:13.352456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.946 [2024-07-24 20:52:13.352471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.946 [2024-07-24 20:52:13.356044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.946 [2024-07-24 20:52:13.365315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.365735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.365764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.365781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.366018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.366271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.366294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.366309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.369868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.379132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.379533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.379563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.379581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.379818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.380059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.380081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.380096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.383666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.393146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.393549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.393581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.393598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.393835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.394076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.394098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.394114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.397685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.407188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.407615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.407646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.407664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.407901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.408142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.408165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.408180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.411757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.421033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.421452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.421483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.421500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.421736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.421978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.422001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.422016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.425591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.434866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.435325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.435356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.435374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.435617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.435858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.435881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.435896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.439490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.448763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.449148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.449179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.449197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.449446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.449688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.449711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.449725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.453301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.462804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.463230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.463269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.463287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.463524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.463765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.463788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.463803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.467376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.476682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.477066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.477098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.477115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.477363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.477606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.477628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.477649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.481216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.490515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.491043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.491104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.491122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.491369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.491612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.491636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.491650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:17.947 [2024-07-24 20:52:13.495216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:17.947 [2024-07-24 20:52:13.504418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.947 [2024-07-24 20:52:13.504823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:17.947 [2024-07-24 20:52:13.504856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:17.947 [2024-07-24 20:52:13.504874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:17.947 [2024-07-24 20:52:13.505112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:17.947 [2024-07-24 20:52:13.505365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.947 [2024-07-24 20:52:13.505389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:17.947 [2024-07-24 20:52:13.505404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.509078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.518490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.518888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.518921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.518938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.519177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.519431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.519455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.519470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.523045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.532352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.532840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.532876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.532894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.533132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.533387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.533411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.533426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.536992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.546279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.546756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.546787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.546804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.547041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.547293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.547316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.547331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.550894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.560170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.560589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.560620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.560638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.560875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.561116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.561138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.561153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.564726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.574199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.574725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.574774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.574792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.575029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.575287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.575310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.575325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.578887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.588170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.588634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.588664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.588682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.588919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.589160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.589182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.589197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.207 [2024-07-24 20:52:13.592771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.207 [2024-07-24 20:52:13.602040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.207 [2024-07-24 20:52:13.602431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.207 [2024-07-24 20:52:13.602463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.207 [2024-07-24 20:52:13.602480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.207 [2024-07-24 20:52:13.602717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.207 [2024-07-24 20:52:13.602958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.207 [2024-07-24 20:52:13.602981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.207 [2024-07-24 20:52:13.602995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.606566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.616054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.616445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.616477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.616494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.616731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.616972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.616994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.617009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.620583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.630050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.630570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.630620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.630638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.630875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.631116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.631138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.631153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.634722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.643985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.644376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.644407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.644424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.644661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.644902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.644924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.644939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.648510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.657992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.658517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.658568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.658585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.658822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.659062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.659085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.659100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.662682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.671942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.672365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.672395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.672418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.672656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.672897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.672919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.672935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.676504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.685981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.686375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.686406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.686423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.686660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.686901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.686924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.686939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.690507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.699976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.700391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.700422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.700439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.700677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.700918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.700940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.700954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.704526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.713995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.714411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.714443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.714460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.714697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.714939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.714970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.714986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.718557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.728023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.728411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.728441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.728458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.728695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.728937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.728959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.728974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.732545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.208 [2024-07-24 20:52:13.742014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.208 [2024-07-24 20:52:13.742407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.208 [2024-07-24 20:52:13.742437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.208 [2024-07-24 20:52:13.742454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.208 [2024-07-24 20:52:13.742691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.208 [2024-07-24 20:52:13.742932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-24 20:52:13.742955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.208 [2024-07-24 20:52:13.742970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.208 [2024-07-24 20:52:13.746539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.209 [2024-07-24 20:52:13.756021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.209 [2024-07-24 20:52:13.756410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.209 [2024-07-24 20:52:13.756441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.209 [2024-07-24 20:52:13.756458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.209 [2024-07-24 20:52:13.756695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.209 [2024-07-24 20:52:13.756936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.209 [2024-07-24 20:52:13.756958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.209 [2024-07-24 20:52:13.756973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.209 [2024-07-24 20:52:13.760543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.209 [2024-07-24 20:52:13.770112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.209 [2024-07-24 20:52:13.770547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.209 [2024-07-24 20:52:13.770580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.209 [2024-07-24 20:52:13.770598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.209 [2024-07-24 20:52:13.770838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.209 [2024-07-24 20:52:13.771098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.209 [2024-07-24 20:52:13.771123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.209 [2024-07-24 20:52:13.771138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.467 [2024-07-24 20:52:13.774828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.467 [2024-07-24 20:52:13.784012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.467 [2024-07-24 20:52:13.784464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.467 [2024-07-24 20:52:13.784496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.467 [2024-07-24 20:52:13.784514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.467 [2024-07-24 20:52:13.784752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.467 [2024-07-24 20:52:13.784994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.467 [2024-07-24 20:52:13.785017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.467 [2024-07-24 20:52:13.785032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.467 [2024-07-24 20:52:13.788629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.467 [2024-07-24 20:52:13.797910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.467 [2024-07-24 20:52:13.798277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.467 [2024-07-24 20:52:13.798309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.467 [2024-07-24 20:52:13.798326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.467 [2024-07-24 20:52:13.798564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.467 [2024-07-24 20:52:13.798805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.467 [2024-07-24 20:52:13.798827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.467 [2024-07-24 20:52:13.798842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.467 [2024-07-24 20:52:13.802408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.467 [2024-07-24 20:52:13.811893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.467 [2024-07-24 20:52:13.812310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.467 [2024-07-24 20:52:13.812351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.467 [2024-07-24 20:52:13.812369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.812613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.812855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.812878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.812892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.816460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.825739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.826166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.826197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.826214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.826461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.826703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.826726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.826741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.830317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.839591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.839978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.840008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.840025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.840273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.840522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.840545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.840560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.844119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.853610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.854020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.854050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.854067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.854317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.854560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.854582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.854603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.858179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.867451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.867860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.867890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.867907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.868144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.868398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.868422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.868436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.871996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.881476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.881863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.881893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.881910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.882147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.882401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.882424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.882439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.886008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.895486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.895895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.895925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.895942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.896179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.896432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.896455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.896470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.900030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.909511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.909928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.909958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.909975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.910212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.910464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.910487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.910502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.914063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.923535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.923942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.923972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.923989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.924226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.924478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.924501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.924515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.928072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.937545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.937927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.937957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.937974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.468 [2024-07-24 20:52:13.938211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.468 [2024-07-24 20:52:13.938462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.468 [2024-07-24 20:52:13.938485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.468 [2024-07-24 20:52:13.938500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.468 [2024-07-24 20:52:13.942064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.468 [2024-07-24 20:52:13.951531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-07-24 20:52:13.951914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.468 [2024-07-24 20:52:13.951944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.468 [2024-07-24 20:52:13.951960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:13.952203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:13.952457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:13.952479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:13.952494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.469 [2024-07-24 20:52:13.956049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1691087 Killed "${NVMF_APP[@]}" "$@" 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:18.469 [2024-07-24 20:52:13.965529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.469 [2024-07-24 20:52:13.965934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.469 [2024-07-24 20:52:13.965974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.469 [2024-07-24 20:52:13.965991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:13.966228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:13.966480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:13.966504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:13.966518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1692175 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1692175 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1692175 ']' 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.469 20:52:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:18.469 [2024-07-24 20:52:13.970076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.469 [2024-07-24 20:52:13.979552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.469 [2024-07-24 20:52:13.979937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.469 [2024-07-24 20:52:13.979968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.469 [2024-07-24 20:52:13.979985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:13.980229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:13.980511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:13.980534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:13.980549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.469 [2024-07-24 20:52:13.984105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.469 [2024-07-24 20:52:13.993580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.469 [2024-07-24 20:52:13.993971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.469 [2024-07-24 20:52:13.994001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.469 [2024-07-24 20:52:13.994019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:13.994265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:13.994507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:13.994530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:13.994544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.469 [2024-07-24 20:52:13.998102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.469 [2024-07-24 20:52:14.007571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.469 [2024-07-24 20:52:14.007967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.469 [2024-07-24 20:52:14.007997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.469 [2024-07-24 20:52:14.008015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:14.008262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:14.008504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:14.008527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:14.008542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.469 [2024-07-24 20:52:14.012102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:18.469 [2024-07-24 20:52:14.016220] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:24:18.469 [2024-07-24 20:52:14.016307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.469 [2024-07-24 20:52:14.021573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.469 [2024-07-24 20:52:14.021960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.469 [2024-07-24 20:52:14.021990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.469 [2024-07-24 20:52:14.022008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.469 [2024-07-24 20:52:14.022257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.469 [2024-07-24 20:52:14.022507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.469 [2024-07-24 20:52:14.022530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.469 [2024-07-24 20:52:14.022545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.469 [2024-07-24 20:52:14.026280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.728 [2024-07-24 20:52:14.035591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.728 [2024-07-24 20:52:14.036020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.728 [2024-07-24 20:52:14.036051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.728 [2024-07-24 20:52:14.036069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.728 [2024-07-24 20:52:14.036317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.728 [2024-07-24 20:52:14.036560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.728 [2024-07-24 20:52:14.036583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.728 [2024-07-24 20:52:14.036598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.728 [2024-07-24 20:52:14.040255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.728 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.728 [2024-07-24 20:52:14.049531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.728 [2024-07-24 20:52:14.049905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.728 [2024-07-24 20:52:14.049937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.728 [2024-07-24 20:52:14.049955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.728 [2024-07-24 20:52:14.050193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.728 [2024-07-24 20:52:14.050445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.728 [2024-07-24 20:52:14.050468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.728 [2024-07-24 20:52:14.050483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.728 [2024-07-24 20:52:14.054047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.728 [2024-07-24 20:52:14.063361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.728 [2024-07-24 20:52:14.063800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.728 [2024-07-24 20:52:14.063841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.728 [2024-07-24 20:52:14.063857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.728 [2024-07-24 20:52:14.064086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.728 [2024-07-24 20:52:14.064334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.728 [2024-07-24 20:52:14.064355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.728 [2024-07-24 20:52:14.064372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.728 [2024-07-24 20:52:14.067344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.728 [2024-07-24 20:52:14.076657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.728 [2024-07-24 20:52:14.077020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.728 [2024-07-24 20:52:14.077047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.728 [2024-07-24 20:52:14.077063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.728 [2024-07-24 20:52:14.077316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.077521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.077540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.077553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.080530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.089965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.090377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.090405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.090421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.090647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.090861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.090879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.090892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.093781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:18.729 [2024-07-24 20:52:14.093885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.103150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.103738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.103776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.103809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.104069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.104299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.104320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.104337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.107346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.116477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.116923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.116967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.116985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.117273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.117486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.117506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.117520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.120509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.129818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.130229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.130262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.130295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.130527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.130759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.130778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.130790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.133761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.143043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.143467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.143496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.143511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.143752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.143966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.143985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.143997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.146973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.156234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.156641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.156683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.156698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.156958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.157188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.157207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.157219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.160194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.169548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.170210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.170254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.170274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.170521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.170739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.170758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.170773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.173797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.182878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.183355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.183384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.183401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.183643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.183857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.183876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.183888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.186838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.196067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.196483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.196511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.196527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.196766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.729 [2024-07-24 20:52:14.196981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.729 [2024-07-24 20:52:14.197000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.729 [2024-07-24 20:52:14.197023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.729 [2024-07-24 20:52:14.200006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.729 [2024-07-24 20:52:14.209372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.729 [2024-07-24 20:52:14.209789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.729 [2024-07-24 20:52:14.209816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.729 [2024-07-24 20:52:14.209831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.729 [2024-07-24 20:52:14.210050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.210288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.210309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.210322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.213329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.730 [2024-07-24 20:52:14.222596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.223003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.223029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.223044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.223309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.223513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.223532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.223545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.226513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:18.730 [2024-07-24 20:52:14.229976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.730 [2024-07-24 20:52:14.230017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.730 [2024-07-24 20:52:14.230051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.730 [2024-07-24 20:52:14.230078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:18.730 [2024-07-24 20:52:14.230093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.730 [2024-07-24 20:52:14.230185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.730 [2024-07-24 20:52:14.230264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.730 [2024-07-24 20:52:14.230286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.730 [2024-07-24 20:52:14.236066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.236517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.236549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.236567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.236794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.237014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.237035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.237050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.240351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.730 [2024-07-24 20:52:14.249574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.250094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.250132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.250151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.250383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.250618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.250639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.250655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.253892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.730 [2024-07-24 20:52:14.263276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.263850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.263904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.263923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.264161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.264416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.264438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.264455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.267630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.730 [2024-07-24 20:52:14.276812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.277420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.277460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.277479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.277718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.277933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.277954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.277983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.730 [2024-07-24 20:52:14.281130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.730 [2024-07-24 20:52:14.290372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.730 [2024-07-24 20:52:14.290845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.730 [2024-07-24 20:52:14.290885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.730 [2024-07-24 20:52:14.290904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.730 [2024-07-24 20:52:14.291127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.730 [2024-07-24 20:52:14.291388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.730 [2024-07-24 20:52:14.291410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.730 [2024-07-24 20:52:14.291426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.294895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.303806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.304320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.304360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.304379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.304626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.304841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.304861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.304877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.308003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.317355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.317968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.318006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.318025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.318255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.318476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.318497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.318512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.321683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.330790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.331192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.331228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.331254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.331470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.331699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.331719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.331733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.334914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.344199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.344603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.344630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.344645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.344859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.345084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.345104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.345117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.348294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.357748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.358160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.358188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.358204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.358426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.358656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.358676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.358689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.361852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.371291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.371698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.371727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.371743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.371956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.372190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.372211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.372224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.375401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.384675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.385059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.385086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.385102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.385325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.385557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.385578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.385591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.388747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.398199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.398609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.398637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.398652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.398865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.399091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.399111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.399124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.402295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.411668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.412020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.412047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.412063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.412284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.412502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.412522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.412535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.415708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.425190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.989 [2024-07-24 20:52:14.425567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:18.989 [2024-07-24 20:52:14.425595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:18.989 [2024-07-24 20:52:14.425610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:18.989 [2024-07-24 20:52:14.425824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:18.989 [2024-07-24 20:52:14.426049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.989 [2024-07-24 20:52:14.426069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:18.989 [2024-07-24 20:52:14.426082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:18.989 [2024-07-24 20:52:14.429233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:18.989 [2024-07-24 20:52:14.438796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.989 [2024-07-24 20:52:14.439160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.989 [2024-07-24 20:52:14.439188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.989 [2024-07-24 20:52:14.439203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.989 [2024-07-24 20:52:14.439422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.989 [2024-07-24 20:52:14.439640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.989 [2024-07-24 20:52:14.439660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.989 [2024-07-24 20:52:14.439673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.989 [2024-07-24 20:52:14.442837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.989 [2024-07-24 20:52:14.452282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.989 [2024-07-24 20:52:14.452701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.989 [2024-07-24 20:52:14.452730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.989 [2024-07-24 20:52:14.452745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.989 [2024-07-24 20:52:14.452974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.453185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.453205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.453217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.456400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.465734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.466111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.466138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.466159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.466381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.466613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.466633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.466646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.469792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.479122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.479514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.479542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.479557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.479785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.479995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.480015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.480027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.483185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.492515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.492929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.492957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.492972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.493186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.493452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.493473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.493486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.496829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.505985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.506371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.506398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.506414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.506642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.506852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.506877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.506890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.510132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.519514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.519861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.519888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.519903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.520131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.520369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.520391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.520405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.523579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.533031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.533446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.533473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.533489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.533702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.533928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.533948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.533960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.537111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:18.990 [2024-07-24 20:52:14.546452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.990 [2024-07-24 20:52:14.546854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:18.990 [2024-07-24 20:52:14.546881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:18.990 [2024-07-24 20:52:14.546897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:18.990 [2024-07-24 20:52:14.547110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:18.990 [2024-07-24 20:52:14.547364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:18.990 [2024-07-24 20:52:14.547386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:18.990 [2024-07-24 20:52:14.547399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.990 [2024-07-24 20:52:14.550561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.248 [2024-07-24 20:52:14.560086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.248 [2024-07-24 20:52:14.560513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.248 [2024-07-24 20:52:14.560543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.248 [2024-07-24 20:52:14.560559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.248 [2024-07-24 20:52:14.560773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.248 [2024-07-24 20:52:14.561000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.248 [2024-07-24 20:52:14.561020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.248 [2024-07-24 20:52:14.561033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.248 [2024-07-24 20:52:14.564251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.248 [2024-07-24 20:52:14.573601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.248 [2024-07-24 20:52:14.573955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.248 [2024-07-24 20:52:14.573983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.248 [2024-07-24 20:52:14.573998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.248 [2024-07-24 20:52:14.574227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.248 [2024-07-24 20:52:14.574467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.248 [2024-07-24 20:52:14.574488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.248 [2024-07-24 20:52:14.574502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.248 [2024-07-24 20:52:14.577680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.248 [2024-07-24 20:52:14.587157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.248 [2024-07-24 20:52:14.587517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.587545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.587560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.587774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.587991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.588011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.588025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.591231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.600591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.600977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.601004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.601020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.601239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.601465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.601486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.601499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.604669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.614023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.614378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.614406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.614422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.614636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.614861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.614881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.614894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.618046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.627537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.627916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.627944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.627959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.628172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.628428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.628451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.628464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.631633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.640925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.641285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.641314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.641329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.641543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.641769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.641789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.641807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.644973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.654468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.654873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.654901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.654917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.655130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.655386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.655408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.655421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.658601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.667897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.668259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.668287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.668302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.668516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.668743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.668763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.668776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.671966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.681451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.681818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.681846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.681861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.682074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.682331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.682353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.682366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.685544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.694842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.695183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.695211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.695226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.695447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.695678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.695698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.695712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.698863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.249 [2024-07-24 20:52:14.708345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.249 [2024-07-24 20:52:14.708754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.249 [2024-07-24 20:52:14.708782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.249 [2024-07-24 20:52:14.708797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.249 [2024-07-24 20:52:14.709011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.249 [2024-07-24 20:52:14.709267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.249 [2024-07-24 20:52:14.709288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.249 [2024-07-24 20:52:14.709301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.249 [2024-07-24 20:52:14.712457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.721752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.722097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.722124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.722140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.722362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.722595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.722616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.722629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.725778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.735238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.735640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.735667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.735683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.735896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.736129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.736149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.736161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.739340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.748725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.749126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.749153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.749169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.749391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.749609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.749644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.749658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.752939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.762339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.762763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.762791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.762806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.763019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.763273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.763294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.763308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.766638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.776014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.776375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.776403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.776418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.776631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.776848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.776868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.776882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.780182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.789565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.789932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.789961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.789977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.790205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.790446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.790468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.790481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.793647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.250 [2024-07-24 20:52:14.802932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.250 [2024-07-24 20:52:14.803293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.250 [2024-07-24 20:52:14.803321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420
00:24:19.250 [2024-07-24 20:52:14.803337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set
00:24:19.250 [2024-07-24 20:52:14.803565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor
00:24:19.250 [2024-07-24 20:52:14.803775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.250 [2024-07-24 20:52:14.803795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.250 [2024-07-24 20:52:14.803808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.250 [2024-07-24 20:52:14.807026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.509 [2024-07-24 20:52:14.816715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.509 [2024-07-24 20:52:14.817097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.509 [2024-07-24 20:52:14.817127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.509 [2024-07-24 20:52:14.817143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.509 [2024-07-24 20:52:14.817366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.509 [2024-07-24 20:52:14.817598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.817619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.817632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.820987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.830322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.830723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.830756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.830774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.831004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.831216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.831260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.831275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.834482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.843819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.844163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.844192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.844208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.844431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.844662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.844682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.844696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.847845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.857334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.857717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.857745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.857760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.857975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.858202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.858237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.858260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.861428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.870723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.871071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.871099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.871114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.871337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.871560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.871581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.871595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.874764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.884266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.884704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.884732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.884747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.884982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.885195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.885215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.885255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.888412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.897704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.898056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.898084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.898100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.898323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.898541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.898577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.898590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.901741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.911198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.911594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.911622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.911638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.911851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.912077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.912097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.912110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.915285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.924808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.925189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.925217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.925232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.925452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.925681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.925701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.925715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.928862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.938341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.938748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.938776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.510 [2024-07-24 20:52:14.938791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.510 [2024-07-24 20:52:14.939005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.510 [2024-07-24 20:52:14.939255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.510 [2024-07-24 20:52:14.939277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.510 [2024-07-24 20:52:14.939290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.510 [2024-07-24 20:52:14.942528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.510 [2024-07-24 20:52:14.951815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.510 [2024-07-24 20:52:14.952205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.510 [2024-07-24 20:52:14.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:14.952257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:14.952472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:14.952701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:14.952722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:14.952735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:14.955883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:14.965404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:14.965771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:14.965798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:14.965819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:14.966049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:14.966286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:14.966308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:14.966321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:14.969474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:14.978809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:14.979154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:14.979182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:14.979198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:14.979419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:14.979650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:14.979670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:14.979683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:14.982917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:14.992311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:14.992674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:14.992703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:14.992718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:14.992932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:14.993149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:14.993170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:14.993183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:14.996466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 20:52:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.511 20:52:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:19.511 20:52:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.511 20:52:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.511 20:52:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.511 [2024-07-24 20:52:15.005882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:15.006227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:15.006261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:15.006285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:15.006499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:15.006719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:15.006740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:15.006754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:15.010048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:15.019309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:15.019721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:15.019748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:15.019764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:15.019977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:15.020203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:15.020238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:15.020263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.511 [2024-07-24 20:52:15.023498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:15.028195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.511 [2024-07-24 20:52:15.032793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:15.033162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:15.033189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:15.033204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:15.033426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:15.033655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:15.033675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:15.033688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:15.036832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.511 [2024-07-24 20:52:15.046495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:15.046853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:15.046880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:15.046896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:15.047124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:15.047364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:15.047394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:15.047408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:15.050694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 [2024-07-24 20:52:15.060087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.511 [2024-07-24 20:52:15.060785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.511 [2024-07-24 20:52:15.060831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.511 [2024-07-24 20:52:15.060851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.511 [2024-07-24 20:52:15.061089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.511 [2024-07-24 20:52:15.061333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.511 [2024-07-24 20:52:15.061355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.511 [2024-07-24 20:52:15.061372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.511 [2024-07-24 20:52:15.064532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.511 Malloc0 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.511 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.512 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.512 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.512 [2024-07-24 20:52:15.073841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.512 [2024-07-24 20:52:15.074269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.512 [2024-07-24 20:52:15.074303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.512 [2024-07-24 20:52:15.074320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.512 [2024-07-24 20:52:15.074552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.512 [2024-07-24 20:52:15.074764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.512 [2024-07-24 20:52:15.074783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.512 [2024-07-24 20:52:15.074797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.769 [2024-07-24 20:52:15.078256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.769 [2024-07-24 20:52:15.087565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.769 [2024-07-24 20:52:15.087926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.769 [2024-07-24 20:52:15.087955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6ac0 with addr=10.0.0.2, port=4420 00:24:19.769 [2024-07-24 20:52:15.087970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6ac0 is same with the state(5) to be set 00:24:19.769 [2024-07-24 20:52:15.088199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6ac0 (9): Bad file descriptor 00:24:19.769 [2024-07-24 20:52:15.088441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.769 
[2024-07-24 20:52:15.088463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.769 [2024-07-24 20:52:15.088476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:19.769 [2024-07-24 20:52:15.088971] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.769 [2024-07-24 20:52:15.091772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.769 20:52:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1691504 00:24:19.769 [2024-07-24 20:52:15.100989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.769 [2024-07-24 20:52:15.139503] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:27.873 00:24:27.873 Latency(us) 00:24:27.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:27.873 Verification LBA range: start 0x0 length 0x4000 00:24:27.873 Nvme1n1 : 15.01 6194.28 24.20 10534.23 0.00 7626.78 618.95 19612.25 00:24:27.873 =================================================================================================================== 00:24:27.873 Total : 6194.28 24.20 10534.23 0.00 7626.78 618.95 19612.25 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.130 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.130 rmmod nvme_tcp 00:24:28.130 rmmod nvme_fabrics 00:24:28.387 rmmod nvme_keyring 00:24:28.387 20:52:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1692175 ']' 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1692175 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1692175 ']' 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1692175 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692175 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692175' 00:24:28.387 killing process with pid 1692175 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1692175 00:24:28.387 20:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1692175 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.645 20:52:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.645 20:52:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.544 00:24:30.544 real 0m23.349s 00:24:30.544 user 1m3.400s 00:24:30.544 sys 0m4.119s 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.544 ************************************ 00:24:30.544 END TEST nvmf_bdevperf 00:24:30.544 ************************************ 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.544 20:52:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.801 ************************************ 00:24:30.801 START TEST nvmf_target_disconnect 00:24:30.801 ************************************ 00:24:30.801 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:30.801 * Looking for test storage... 
00:24:30.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.801 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.801 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:30.801 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.802 20:52:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.802 20:52:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.701 
20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.701 20:52:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.701 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.702 20:52:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.702 20:52:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.702 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:24:32.960 00:24:32.960 --- 10.0.0.2 ping statistics --- 00:24:32.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.960 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:32.960 00:24:32.960 --- 10.0.0.1 ping statistics --- 00:24:32.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.960 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:32.960 ************************************ 00:24:32.960 START TEST nvmf_target_disconnect_tc1 00:24:32.960 ************************************ 00:24:32.960 20:52:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:32.960 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.961 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.961 [2024-07-24 20:52:28.462788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.961 [2024-07-24 20:52:28.462858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a81a0 with addr=10.0.0.2, port=4420 00:24:32.961 [2024-07-24 20:52:28.462899] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:32.961 [2024-07-24 20:52:28.462925] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:32.961 [2024-07-24 20:52:28.462939] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:32.961 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:32.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:32.961 Initializing NVMe Controllers 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.961 20:52:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.961 00:24:32.961 real 0m0.094s 00:24:32.961 user 0m0.047s 00:24:32.961 sys 0m0.047s 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 ************************************ 00:24:32.961 END TEST nvmf_target_disconnect_tc1 00:24:32.961 ************************************ 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 ************************************ 00:24:32.961 START TEST nvmf_target_disconnect_tc2 00:24:32.961 ************************************ 00:24:32.961 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1695322 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1695322 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1695322 ']' 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.219 20:52:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:33.219 [2024-07-24 20:52:28.576303] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:24:33.219 [2024-07-24 20:52:28.576384] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.219 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.219 [2024-07-24 20:52:28.644992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.219 [2024-07-24 20:52:28.768009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.219 [2024-07-24 20:52:28.768079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.219 [2024-07-24 20:52:28.768096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.220 [2024-07-24 20:52:28.768109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.220 [2024-07-24 20:52:28.768120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.220 [2024-07-24 20:52:28.768208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:33.220 [2024-07-24 20:52:28.768276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:33.220 [2024-07-24 20:52:28.768329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:33.220 [2024-07-24 20:52:28.768333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 Malloc0 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 [2024-07-24 20:52:29.570880] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 [2024-07-24 20:52:29.599125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1695481 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:34.179 20:52:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:34.179 EAL: No free 2048 kB 
hugepages reported on node 1 00:24:36.078 20:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1695322 00:24:36.078 20:52:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, 
sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 [2024-07-24 20:52:31.624819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 
starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O 
failed 00:24:36.078 [2024-07-24 20:52:31.625174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 
00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Read completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.078 Write completed with error (sct=0, sc=8) 00:24:36.078 starting I/O failed 00:24:36.079 Write completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 Write completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 Read completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 Read completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 Read completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 Write completed with error (sct=0, sc=8) 00:24:36.079 starting I/O failed 00:24:36.079 [2024-07-24 20:52:31.625498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:36.079 [2024-07-24 20:52:31.625683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.625712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.625877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.625903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 
00:24:36.079 [2024-07-24 20:52:31.626022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.626154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.626314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.626477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.626656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 
00:24:36.079 [2024-07-24 20:52:31.626810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.626836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.626993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.627223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.627394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.627537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 
00:24:36.079 [2024-07-24 20:52:31.627689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.627853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.627879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.628139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.628167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.628314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.628344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.628486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.628511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 
00:24:36.079 [2024-07-24 20:52:31.628654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.628680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.628828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.628854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.628994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.629187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.629336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 
00:24:36.079 [2024-07-24 20:52:31.629479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.629640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.629812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.629838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.630012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.079 [2024-07-24 20:52:31.630040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.079 qpair failed and we were unable to recover it. 00:24:36.079 [2024-07-24 20:52:31.630192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.080 [2024-07-24 20:52:31.630218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.080 qpair failed and we were unable to recover it. 
00:24:36.080 [2024-07-24 20:52:31.630333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.080 [2024-07-24 20:52:31.630359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.080 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / qpair failed triplets for tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 repeat from 20:52:31.630 through 20:52:31.641; repeats elided ...]
00:24:36.081 [2024-07-24 20:52:31.641309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.081 [2024-07-24 20:52:31.641348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.081 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / qpair failed triplets for tqpair=0x672250 with addr=10.0.0.2, port=4420 repeat from 20:52:31.641 through 20:52:31.650; repeats elided ...]
00:24:36.362 [2024-07-24 20:52:31.650788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.650815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.650936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.650961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 
00:24:36.362 [2024-07-24 20:52:31.651493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.651847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.651997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.652206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 
00:24:36.362 [2024-07-24 20:52:31.652378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.652542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.652703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.652874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.652917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.653094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 
00:24:36.362 [2024-07-24 20:52:31.653274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.653412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.653538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.653728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.653861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.653886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 
00:24:36.362 [2024-07-24 20:52:31.654056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.654099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.654253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.654285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.654420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.362 [2024-07-24 20:52:31.654445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.362 qpair failed and we were unable to recover it. 00:24:36.362 [2024-07-24 20:52:31.654587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.654612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.654805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.654864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.655012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.655218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.655379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.655521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.655715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.655874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.655899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.656595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.656955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.656982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.657170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.657201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.657323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.657350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.657512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.657537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.657664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.657690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.657855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.657881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.658016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.658223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.658415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.658554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.658711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.658912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.658938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.659040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.659240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.659419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.659624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.659800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.659969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.659997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 
00:24:36.363 [2024-07-24 20:52:31.660136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.660165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.660328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.660355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.660486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.363 [2024-07-24 20:52:31.660511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.363 qpair failed and we were unable to recover it. 00:24:36.363 [2024-07-24 20:52:31.660669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.660695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.660857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.660883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 
00:24:36.364 [2024-07-24 20:52:31.661027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.661052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.661204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.661234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.661378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.661403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.661626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.661678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.661835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.661876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 
00:24:36.364 [2024-07-24 20:52:31.661981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.662120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.662280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.662468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.662631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 
00:24:36.364 [2024-07-24 20:52:31.662766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.662953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.662978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.663113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.663138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.663252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.663278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 00:24:36.364 [2024-07-24 20:52:31.663417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.364 [2024-07-24 20:52:31.663443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.364 qpair failed and we were unable to recover it. 
00:24:36.364 [2024-07-24 20:52:31.663555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.663581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 [2024-07-24 20:52:31.663697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.663723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 [2024-07-24 20:52:31.663858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.663885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 [2024-07-24 20:52:31.664019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.664044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 [2024-07-24 20:52:31.664206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.664232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 [2024-07-24 20:52:31.664423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.364 [2024-07-24 20:52:31.664461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.364 qpair failed and we were unable to recover it.
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Read completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.364 Write completed with error (sct=0, sc=8)
00:24:36.364 starting I/O failed
00:24:36.365 [2024-07-24 20:52:31.664785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:36.365 [2024-07-24 20:52:31.665005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.665212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.665381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.665565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.665726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.665889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.665915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.666084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.666271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.666429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.666586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.666781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.666979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.667142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.667298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.667509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.667736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.667905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.667947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.668954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.668980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.669900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.669927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.670055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.670248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.670275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.670426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.365 [2024-07-24 20:52:31.670470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.365 qpair failed and we were unable to recover it.
00:24:36.365 [2024-07-24 20:52:31.670654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.670698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.670870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.670896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.671857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.671899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.672874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.672984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.673947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.673972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.674126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.674153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.674347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.674373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.366 [2024-07-24 20:52:31.674479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.366 [2024-07-24 20:52:31.674504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.366 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.674765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.674823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.674959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.674987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.675868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.675893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.676836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.676967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.677164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.677331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.677473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.677655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.677806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.677859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.678955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.678983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.679220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.679260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.679396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.679423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.679571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.679599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.679774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.679799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.679956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.679984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.680099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.680128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.680282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.680308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.367 [2024-07-24 20:52:31.680440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.367 [2024-07-24 20:52:31.680466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.367 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.680580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.680605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.680739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.680763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.680895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.680920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.681845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.681870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.682003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.682028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.682176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.368 [2024-07-24 20:52:31.682204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.368 qpair failed and we were unable to recover it.
00:24:36.368 [2024-07-24 20:52:31.682340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.682366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.682497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.682522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.682683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.682708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.682850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.682875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.683037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 
00:24:36.368 [2024-07-24 20:52:31.683186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.683314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.683454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.683616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.683831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.683859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 
00:24:36.368 [2024-07-24 20:52:31.684041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.684224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.684377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.684509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.684638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 
00:24:36.368 [2024-07-24 20:52:31.684791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.684950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.684977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.685150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.685175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.685332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.685358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.685495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.685520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 
00:24:36.368 [2024-07-24 20:52:31.685644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.685670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.685817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.368 [2024-07-24 20:52:31.685845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.368 qpair failed and we were unable to recover it. 00:24:36.368 [2024-07-24 20:52:31.685991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.686152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.686298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.686461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.686618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.686753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.686910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.686937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.687122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.687149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.687429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.687454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.687596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.687621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.687786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.687810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.687957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.687986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.688149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.688177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.688373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.688399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.688560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.688584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.688734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.688761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.688971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.688999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.689142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.689171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.689356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.689382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.689517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.689543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.689696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.689724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.689878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.689902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.690057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.690082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.690221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.690251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.690416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.690459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.690619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.690643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.690778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.690802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.690995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.691165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.691340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.691520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.691667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 00:24:36.369 [2024-07-24 20:52:31.691829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.369 [2024-07-24 20:52:31.691853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.369 qpair failed and we were unable to recover it. 
00:24:36.369 [2024-07-24 20:52:31.692008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.692167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.692329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.692485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.692702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 
00:24:36.370 [2024-07-24 20:52:31.692863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.692905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.693034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.693062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.693261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.693289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.693428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.693452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.693617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.693645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 
00:24:36.370 [2024-07-24 20:52:31.693802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.693828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.693974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.694184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.694388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.694558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 
00:24:36.370 [2024-07-24 20:52:31.694733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.694911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.694936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.695088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.695115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.695308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.695333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.695441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.695470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 
00:24:36.370 [2024-07-24 20:52:31.695637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.695662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.695795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.370 [2024-07-24 20:52:31.695821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.370 qpair failed and we were unable to recover it. 00:24:36.370 [2024-07-24 20:52:31.695954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.371 [2024-07-24 20:52:31.695980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.371 qpair failed and we were unable to recover it. 00:24:36.371 [2024-07-24 20:52:31.696116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.371 [2024-07-24 20:52:31.696140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.371 qpair failed and we were unable to recover it. 00:24:36.371 [2024-07-24 20:52:31.696303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.371 [2024-07-24 20:52:31.696328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.371 qpair failed and we were unable to recover it. 
00:24:36.375 [2024-07-24 20:52:31.714390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.714415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.714573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.714598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.714733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.714758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.714916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.714941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.715075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 
00:24:36.375 [2024-07-24 20:52:31.715206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.715430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.715582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.715737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.715963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.715988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 
00:24:36.375 [2024-07-24 20:52:31.716119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.716144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.716280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.716323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.716496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.375 [2024-07-24 20:52:31.716524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.375 qpair failed and we were unable to recover it. 00:24:36.375 [2024-07-24 20:52:31.716670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.716695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.716829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.716854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 
00:24:36.376 [2024-07-24 20:52:31.716988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.717174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.717367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.717522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.717688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 
00:24:36.376 [2024-07-24 20:52:31.717849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.717877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 
00:24:36.376 [2024-07-24 20:52:31.718611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.718795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.718976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.719179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.719351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 
00:24:36.376 [2024-07-24 20:52:31.719489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.719651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.719772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.719935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.719963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.720137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.720165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 
00:24:36.376 [2024-07-24 20:52:31.720350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.720375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.720478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.720503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.720642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.376 [2024-07-24 20:52:31.720667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.376 qpair failed and we were unable to recover it. 00:24:36.376 [2024-07-24 20:52:31.720779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.720805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.720946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.720971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 
00:24:36.377 [2024-07-24 20:52:31.721099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.721274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.721462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.721626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.721786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 
00:24:36.377 [2024-07-24 20:52:31.721948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.721973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.722133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.722294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.722439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.722581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 
00:24:36.377 [2024-07-24 20:52:31.722766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.722936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.722964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.723085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.723109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.723249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.723274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.723483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.723508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 
00:24:36.377 [2024-07-24 20:52:31.723644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.723669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.723834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.723861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.723983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.724011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.724164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.724189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.724303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.724329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 
00:24:36.377 [2024-07-24 20:52:31.724504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.724530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.377 qpair failed and we were unable to recover it. 00:24:36.377 [2024-07-24 20:52:31.724665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.377 [2024-07-24 20:52:31.724690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.724823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.724848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.724983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.725133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 
00:24:36.378 [2024-07-24 20:52:31.725296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.725511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.725672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.725807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.725834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.725994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 
00:24:36.378 [2024-07-24 20:52:31.726152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.726362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.726497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.726663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 00:24:36.378 [2024-07-24 20:52:31.726852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.726876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 
00:24:36.378 [2024-07-24 20:52:31.727039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.378 [2024-07-24 20:52:31.727067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.378 qpair failed and we were unable to recover it. 
[... same connect()/qpair error pair repeated verbatim (timestamps 20:52:31.727 through 20:52:31.746) ...]
00:24:36.382 [2024-07-24 20:52:31.746247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.746273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.746436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.746461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.746598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.746623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.746759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.746784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.746915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.746940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.747099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.747124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.747263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.747305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.747470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.747495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.747640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.747673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.747815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.747842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.747991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.748145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.748325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.748531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.748709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.748885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.748915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.749068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.749193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.749399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.749558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.749726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.749881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.749906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.750055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.750251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.750376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.750541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 
00:24:36.382 [2024-07-24 20:52:31.750698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.750862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.750887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.382 qpair failed and we were unable to recover it. 00:24:36.382 [2024-07-24 20:52:31.751018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.382 [2024-07-24 20:52:31.751043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.751171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.751213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.751344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.751370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.383 [2024-07-24 20:52:31.751470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.751495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.751623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.751648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.751831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.751856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.751990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.752141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.383 [2024-07-24 20:52:31.752329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.752473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.752631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.752817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.752842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.753044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.383 [2024-07-24 20:52:31.753197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.753351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.753478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.753603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.753762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.383 [2024-07-24 20:52:31.753926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.753951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.754084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.754109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.754293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.754322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.754459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.754485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.754643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.754687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.383 [2024-07-24 20:52:31.754832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.754862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.754993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.755019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.755145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.755170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.755276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.755301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 00:24:36.383 [2024-07-24 20:52:31.755431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.383 [2024-07-24 20:52:31.755457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.383 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.755614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.755641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.755784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.755812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.755994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.756163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.756343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.756500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.756634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.756769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.756923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.756948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.757061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.757253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.757433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.757585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.757735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.757952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.757977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.758106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.758253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.758408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.758540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.758726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.758914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.758943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.759049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.759074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.759188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.759213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.759345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.759370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.759487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.759512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 
00:24:36.384 [2024-07-24 20:52:31.759675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.384 [2024-07-24 20:52:31.759704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.384 qpair failed and we were unable to recover it. 00:24:36.384 [2024-07-24 20:52:31.759854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.385 [2024-07-24 20:52:31.759879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.385 qpair failed and we were unable to recover it. 00:24:36.385 [2024-07-24 20:52:31.759972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.385 [2024-07-24 20:52:31.759997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.385 qpair failed and we were unable to recover it. 00:24:36.385 [2024-07-24 20:52:31.760144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.385 [2024-07-24 20:52:31.760172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.385 qpair failed and we were unable to recover it. 00:24:36.385 [2024-07-24 20:52:31.760299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.385 [2024-07-24 20:52:31.760325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.385 qpair failed and we were unable to recover it. 
00:24:36.385 [2024-07-24 20:52:31.760465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.760491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.760637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.760661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.760796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.760821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.760978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.761972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.761998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.762954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.762996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.763936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.763961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.764087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.764114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.764260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.764286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.764416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.764458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.764573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.764601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.385 [2024-07-24 20:52:31.764743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.385 [2024-07-24 20:52:31.764768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.385 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.764878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.764903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.765874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.765900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.766896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.766925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.767886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.767912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.768124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.768310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.768496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.768697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.768858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.768997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.769173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.769351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.769503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.769701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.769861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.769886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.770022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.770047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.770212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.770247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.770430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.770456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.386 [2024-07-24 20:52:31.770629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.386 [2024-07-24 20:52:31.770657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.386 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.770801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.770830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.770961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.770987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.771862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.771888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.772047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.772268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.772484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.772664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.772829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.772991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.773196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.773354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.773535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.773674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.773854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.773882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.774954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.774982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.775123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.775151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.775332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.775359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.775522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.775550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.387 [2024-07-24 20:52:31.775699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.387 [2024-07-24 20:52:31.775727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.387 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.775925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.388 [2024-07-24 20:52:31.775968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.388 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.776101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.388 [2024-07-24 20:52:31.776126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.388 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.776267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.388 [2024-07-24 20:52:31.776295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.388 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.776443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.388 [2024-07-24 20:52:31.776487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.388 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.776592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.388 [2024-07-24 20:52:31.776619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.388 qpair failed and we were unable to recover it.
00:24:36.388 [2024-07-24 20:52:31.776812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.776856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.777012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.777218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.777394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.777545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 
00:24:36.388 [2024-07-24 20:52:31.777735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.777914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.777963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.778112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.778141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.778280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.778306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.778435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.778461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 
00:24:36.388 [2024-07-24 20:52:31.778648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.778676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.778845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.778898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.779027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.779199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.779382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 
00:24:36.388 [2024-07-24 20:52:31.779536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.779696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.779880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.779908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.780018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.780183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 
00:24:36.388 [2024-07-24 20:52:31.780357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.780536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.780708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.780915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.780959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.781096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.781122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 
00:24:36.388 [2024-07-24 20:52:31.781234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.781268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.781449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.781492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.781643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.781685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.388 qpair failed and we were unable to recover it. 00:24:36.388 [2024-07-24 20:52:31.781864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.388 [2024-07-24 20:52:31.781907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.782009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.782153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.782303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.782522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.782695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.782868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.782896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.783021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.783201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.783393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.783573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.783730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.783942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.783970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.784106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.784134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.784319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.784345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.784459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.784485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.784632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.784660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.784856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.784914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.785057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.785084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.785213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.785239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.785417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.785442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.785604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.785632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.785831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.785860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.786008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.786207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.786403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.786531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 
00:24:36.389 [2024-07-24 20:52:31.786719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.786954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.786990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.787121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.787146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.389 [2024-07-24 20:52:31.787264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.389 [2024-07-24 20:52:31.787290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.389 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.787426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.787453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.787607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.787635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.787779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.787807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.787935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.787978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.788124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.788152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.788317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.788343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.788453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.788478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.788643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.788668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.788826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.788856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.788997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.789195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.789388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.789510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.789669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.789883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.789911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.790050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.790197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.790386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.790525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.790666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.790856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.790885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.790999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.791027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.791199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.791228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.791391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.791416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.791572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.791611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 00:24:36.390 [2024-07-24 20:52:31.791778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.791823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.390 qpair failed and we were unable to recover it. 
00:24:36.390 [2024-07-24 20:52:31.792006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.390 [2024-07-24 20:52:31.792049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.792144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.792170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.792308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.792334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.792458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.792500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.792681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.792723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 
00:24:36.391 [2024-07-24 20:52:31.792904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.792948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.793085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.793111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.793248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.793274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.793399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.793428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 00:24:36.391 [2024-07-24 20:52:31.793570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.391 [2024-07-24 20:52:31.793614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.391 qpair failed and we were unable to recover it. 
00:24:36.391 [2024-07-24 20:52:31.793737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.793765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.793882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.793914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.794897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.794922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.795110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.795275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.795465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.795689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.795836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.795989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.796870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.796986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.797012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.797171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.797209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.797358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.797386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.797546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.797572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.391 qpair failed and we were unable to recover it.
00:24:36.391 [2024-07-24 20:52:31.797733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.391 [2024-07-24 20:52:31.797759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.797886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.797928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.798080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.798110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.798339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.798368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.798519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.798547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.798710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.798738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.798886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.798916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.799061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.799089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.799233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.799268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.799435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.799460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.799636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.799665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.799819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.799861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.800944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.800972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.801172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.801359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.801512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.801674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.801855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.801982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.802012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.802187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.802215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.802398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.802423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.802623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.802673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.802818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.802846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.802992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.803019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.803174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.803200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.803346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.392 [2024-07-24 20:52:31.803372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.392 qpair failed and we were unable to recover it.
00:24:36.392 [2024-07-24 20:52:31.803475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.803500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.803635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.803660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.803839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.803867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.803995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.804023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.804228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.804287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.804427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.804454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.804598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.804625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.804774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.804818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.804970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.805175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.805341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.805526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.805726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.805931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.805974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.806133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.806159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.806342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.806386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.806537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.806580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.806709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.806753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.806868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.806893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.807895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.807924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.808870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.808897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.809033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.809059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.809154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.809179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.809306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.809335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.809504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.809548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.393 qpair failed and we were unable to recover it.
00:24:36.393 [2024-07-24 20:52:31.809698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.393 [2024-07-24 20:52:31.809741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.394 qpair failed and we were unable to recover it.
00:24:36.394 [2024-07-24 20:52:31.809841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.394 [2024-07-24 20:52:31.809867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.394 qpair failed and we were unable to recover it.
00:24:36.394 [2024-07-24 20:52:31.810010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.394 [2024-07-24 20:52:31.810035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.394 qpair failed and we were unable to recover it.
00:24:36.394 [2024-07-24 20:52:31.810173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.394 [2024-07-24 20:52:31.810199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.394 qpair failed and we were unable to recover it.
00:24:36.394 [2024-07-24 20:52:31.810360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.810405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.810586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.810629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.810755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.810798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.810932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.810958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.811071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.811097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 
00:24:36.394 [2024-07-24 20:52:31.811264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.811291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.811470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.811517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.811699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.811742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.811848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.811874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 00:24:36.394 [2024-07-24 20:52:31.812011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.394 [2024-07-24 20:52:31.812037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.394 qpair failed and we were unable to recover it. 
00:24:36.394 [2024-07-24 20:52:31.812364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.394 [2024-07-24 20:52:31.812407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.394 qpair failed and we were unable to recover it.
00:24:36.397 [2024-07-24 20:52:31.828647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.828672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.828802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.828827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.828991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.829162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.829341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.829495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.829646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.829868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.829896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.830043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.830195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.830366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.830547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.830727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.830865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.830893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.831037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.831188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.831402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.831556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.831713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.831911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.831940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.832080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.832228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.832377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.832561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.832761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.832907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.832934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.833056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.833257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.833398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.833579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 
00:24:36.397 [2024-07-24 20:52:31.833758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.833951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.833976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.397 qpair failed and we were unable to recover it. 00:24:36.397 [2024-07-24 20:52:31.834145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.397 [2024-07-24 20:52:31.834170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.834334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.834363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.834535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.834563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.834721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.834746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.834873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.834898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.835053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.835263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.835436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.835582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.835736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.835896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.835922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.836082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.836114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.836269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.836295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.836473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.836501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.836651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.836680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.836860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.836885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.837040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.837216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.837375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.837530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.837668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.837876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.837901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.838010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.838203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.838386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.838524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.838739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.838892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.838917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.839051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.839212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.839356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.839511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.839706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.839883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.839909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.840009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.840034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.840160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.840188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.840356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.840382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 00:24:36.398 [2024-07-24 20:52:31.840483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.398 [2024-07-24 20:52:31.840508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.398 qpair failed and we were unable to recover it. 
00:24:36.398 [2024-07-24 20:52:31.840698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.840723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.840858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.840883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.840974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.840999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.841126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.841154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.841304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.841330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 
00:24:36.399 [2024-07-24 20:52:31.841476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.841500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.841656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.841684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.841832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.841857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.841988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.842140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 
00:24:36.399 [2024-07-24 20:52:31.842328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.842496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.842683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.842836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.842861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 00:24:36.399 [2024-07-24 20:52:31.842994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.399 [2024-07-24 20:52:31.843022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.399 qpair failed and we were unable to recover it. 
00:24:36.399 [2024-07-24 20:52:31.843151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.843269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.843390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.843567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.843721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.843879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.843904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.844954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.844979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.845139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.845298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.845494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.845639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.845842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.845989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.399 [2024-07-24 20:52:31.846017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.399 qpair failed and we were unable to recover it.
00:24:36.399 [2024-07-24 20:52:31.846202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.846227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.846342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.846385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.846499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.846527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.846677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.846702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.846845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.846886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.847918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.847960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.848945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.848988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.849108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.849136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.849291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.849317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.849488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.849515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.849639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.849668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.849855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.849880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.850860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.850884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.851888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.851914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.852014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.852039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.852222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.852257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.852396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.400 [2024-07-24 20:52:31.852422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.400 qpair failed and we were unable to recover it.
00:24:36.400 [2024-07-24 20:52:31.852558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.852584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.852762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.852790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.852966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.852991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.853166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.853194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.853353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.853382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.853503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.853528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.853666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.853692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.853828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.853853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.854807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.854974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.855890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.855915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.856875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.856900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.857847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.857873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.858888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.401 [2024-07-24 20:52:31.858917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.401 qpair failed and we were unable to recover it.
00:24:36.401 [2024-07-24 20:52:31.859017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.401 [2024-07-24 20:52:31.859043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.401 qpair failed and we were unable to recover it. 00:24:36.401 [2024-07-24 20:52:31.859234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.401 [2024-07-24 20:52:31.859268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.401 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.859415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.859440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.859592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.859620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.859770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.859798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.859970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.859995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.860145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.860173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.860321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.860349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.860480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.860505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.860645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.860670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.860828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.860856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.861017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.861204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.861387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.861596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.861777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.861930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.861958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.862110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.862135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.862320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.862349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.862473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.862502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.862651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.862677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.862813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.862854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.863006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.863152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.863377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.863552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.863755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.863957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.863986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.864181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.864209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.864365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.864390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.864550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.864576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.864676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.864702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.864833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.864858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.865299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.865899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.865924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 
00:24:36.402 [2024-07-24 20:52:31.866036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.866066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.402 qpair failed and we were unable to recover it. 00:24:36.402 [2024-07-24 20:52:31.866165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.402 [2024-07-24 20:52:31.866190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.866330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.866356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.866541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.866569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.866721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.866746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.866896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.866923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.867032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.867239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.867374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.867539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.867737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.867896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.867921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.868477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.868934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.868962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.869086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.869113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.869239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.869270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.869435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.869463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.869622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.869648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.869758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.869784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.869973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.870179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.870343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.870528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.870741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.870911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.870939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.871081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.871286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.871416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.871546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.871698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.871837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.871965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.871990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.872116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.872141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.872296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.872324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.403 [2024-07-24 20:52:31.872503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.872531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 
00:24:36.403 [2024-07-24 20:52:31.872679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.403 [2024-07-24 20:52:31.872704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.403 qpair failed and we were unable to recover it. 00:24:36.404 [2024-07-24 20:52:31.872858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.404 [2024-07-24 20:52:31.872886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.404 qpair failed and we were unable to recover it. 00:24:36.404 [2024-07-24 20:52:31.873062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.404 [2024-07-24 20:52:31.873090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.404 qpair failed and we were unable to recover it. 00:24:36.404 [2024-07-24 20:52:31.873262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.404 [2024-07-24 20:52:31.873288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.404 qpair failed and we were unable to recover it. 00:24:36.404 [2024-07-24 20:52:31.873424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.404 [2024-07-24 20:52:31.873449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.404 qpair failed and we were unable to recover it. 
00:24:36.404 [2024-07-24 20:52:31.873556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.404 [2024-07-24 20:52:31.873581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.404 qpair failed and we were unable to recover it.
[... identical connect() errno 111 / nvme_tcp_qpair_connect_sock failures against addr=10.0.0.2, port=4420 repeat from 20:52:31.873 through 20:52:31.891, mostly for tqpair=0x672250, briefly for tqpair=0x7f4fc0000b90 at 20:52:31.883; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:36.406 [2024-07-24 20:52:31.891676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.406 [2024-07-24 20:52:31.891703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.891859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.891887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.892035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.892218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.892369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.892531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.892720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.892876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.892901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.893367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.893974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.893999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.894146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.894298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.894512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.894692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.894827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.894968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.894993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.895131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.895281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.895420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.895553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.895756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.895913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.895941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.896502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.896957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.896982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.897093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.897223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.897367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.897523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.897708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.897858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.897883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 
00:24:36.407 [2024-07-24 20:52:31.898020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.898044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.898176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.898203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.898317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.898343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.898481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.898506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.407 qpair failed and we were unable to recover it. 00:24:36.407 [2024-07-24 20:52:31.898640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.407 [2024-07-24 20:52:31.898666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.898785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.898809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.898914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.898939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.899509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.899836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.899976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.900129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.900284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.900441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.900563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.900721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.900900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.900928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.901064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.901255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.901421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.901573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.901740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.901922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.901963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.902106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.902248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.902389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.902541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.902710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.902865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.902891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.903008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.903033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.903159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.903187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 00:24:36.408 [2024-07-24 20:52:31.903365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.408 [2024-07-24 20:52:31.903392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.408 qpair failed and we were unable to recover it. 
00:24:36.408 [2024-07-24 20:52:31.903508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.408 [2024-07-24 20:52:31.903533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.408 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f4fc8000b90 through 20:52:31.906188 ...]
00:24:36.409 [2024-07-24 20:52:31.906188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.409 [2024-07-24 20:52:31.906226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.409 qpair failed and we were unable to recover it.
[... identical sequence repeats for tqpair=0x672250 through 20:52:31.919017 ...]
00:24:36.693 [2024-07-24 20:52:31.919137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680230 is same with the state(5) to be set
00:24:36.693 [2024-07-24 20:52:31.919297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.693 [2024-07-24 20:52:31.919336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.693 qpair failed and we were unable to recover it.
[... identical sequence repeats for tqpair=0x7f4fb8000b90 through 20:52:31.920171 ...]
00:24:36.693 [2024-07-24 20:52:31.920312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.693 [2024-07-24 20:52:31.920339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.693 qpair failed and we were unable to recover it.
[... identical sequence repeats for tqpair=0x672250 through 20:52:31.920917 ...]
00:24:36.694 [2024-07-24 20:52:31.921045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.921174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.921326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.921466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.921625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.921770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.921908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.921933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.922550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.922886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.922989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.923126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.923288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.923454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.923613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.923741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.923866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.923892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.924024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.924179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.924347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.924509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.924639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.924760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.924922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.924949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 
00:24:36.694 [2024-07-24 20:52:31.925534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.925878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.925984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.926009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.694 qpair failed and we were unable to recover it. 00:24:36.694 [2024-07-24 20:52:31.926112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.694 [2024-07-24 20:52:31.926137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.926253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.926280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.926436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.926461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.926594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.926620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.926730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.926755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.926857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.926882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.927011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.927138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.927272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.927441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.927588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.927745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.927939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.927964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.928075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.928218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.928360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.928488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.928687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.928875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.928900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.929338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.929925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.929954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.930105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.930234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.930397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.930531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.930687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.930817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.930842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 
00:24:36.695 [2024-07-24 20:52:31.931565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.695 [2024-07-24 20:52:31.931857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.695 [2024-07-24 20:52:31.931882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.695 qpair failed and we were unable to recover it. 00:24:36.696 [2024-07-24 20:52:31.931994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.696 [2024-07-24 20:52:31.932019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.696 qpair failed and we were unable to recover it. 00:24:36.696 [2024-07-24 20:52:31.932123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.696 [2024-07-24 20:52:31.932148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.696 qpair failed and we were unable to recover it. 
00:24:36.696 [2024-07-24 20:52:31.932284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.696 [2024-07-24 20:52:31.932309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.696 qpair failed and we were unable to recover it.
[... the same triplet — connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 20:52:31.932470 through 20:52:31.950303, all against addr=10.0.0.2, port=4420; the failing tqpair is 0x672250 throughout, except for a brief run of failures on tqpair=0x7f4fc8000b90 at 20:52:31.946391–31.947151 before reverting to 0x672250 ...]
00:24:36.699 [2024-07-24 20:52:31.950467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.950492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.950607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.950631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.950741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.950766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.950878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.950902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.951067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 
00:24:36.699 [2024-07-24 20:52:31.951195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.951393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.951555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.951721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.951877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.951902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 
00:24:36.699 [2024-07-24 20:52:31.952033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.952157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.952288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.952447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.952618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 
00:24:36.699 [2024-07-24 20:52:31.952749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.952917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.952942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.953057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.953082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.953183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.953209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.699 [2024-07-24 20:52:31.953363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.953389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 
00:24:36.699 [2024-07-24 20:52:31.953528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.699 [2024-07-24 20:52:31.953553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.699 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.953677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.953703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.953815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.953840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.953969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.953994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.954103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.954237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.954378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.954536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.954658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.954819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.954849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.954984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.955126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.955305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.955432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.955584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.955712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.955909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.955935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.956463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.956921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.956947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.957044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.957184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.957336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.957471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.957605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.957750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 
00:24:36.700 [2024-07-24 20:52:31.957903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.957930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.958041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.958066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.958204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.958230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.958396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.958422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.700 qpair failed and we were unable to recover it. 00:24:36.700 [2024-07-24 20:52:31.958533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.700 [2024-07-24 20:52:31.958557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 
00:24:36.701 [2024-07-24 20:52:31.958694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.958719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.958880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.958909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.959039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.959208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.959366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 
00:24:36.701 [2024-07-24 20:52:31.959538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.959675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.959835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.959861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 
00:24:36.701 [2024-07-24 20:52:31.960297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.960924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.960950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 
00:24:36.701 [2024-07-24 20:52:31.961055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.961080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.961211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.961236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.961384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.961409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.961545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.961570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 00:24:36.701 [2024-07-24 20:52:31.961732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.701 [2024-07-24 20:52:31.961758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.701 qpair failed and we were unable to recover it. 
00:24:36.701 [2024-07-24 20:52:31.961885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.701 [2024-07-24 20:52:31.961911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.701 qpair failed and we were unable to recover it.
00:24:36.701 [2024-07-24 20:52:31.962014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.701 [2024-07-24 20:52:31.962039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.701 qpair failed and we were unable to recover it.
00:24:36.701 [2024-07-24 20:52:31.962148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.701 [2024-07-24 20:52:31.962173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.701 qpair failed and we were unable to recover it.
00:24:36.701 [2024-07-24 20:52:31.962290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.701 [2024-07-24 20:52:31.962315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.701 qpair failed and we were unable to recover it.
00:24:36.701 [2024-07-24 20:52:31.962475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.962500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.962603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.962628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.962771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.962796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.962940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.962966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.963932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.963958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.964864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.964890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.965846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.965872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.966896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.966999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.967929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.967954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.968092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.968117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.968278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.968317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.968425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.702 [2024-07-24 20:52:31.968452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.702 qpair failed and we were unable to recover it.
00:24:36.702 [2024-07-24 20:52:31.968567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.968592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.968735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.968760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.968907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.968932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.969857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.969883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.970953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.970978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.971964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.971989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.972850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.972976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.973949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.973974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.974077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.974102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.703 [2024-07-24 20:52:31.974254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.703 [2024-07-24 20:52:31.974281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.703 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.974415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.974440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.974600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.974624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.974726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.974751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.974893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.974918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.975856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.975986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.976877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.976902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.977857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.977975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.978872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.978979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.979006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.979115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.979141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.979255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.979283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.979413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.979438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.979583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.704 [2024-07-24 20:52:31.979608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.704 qpair failed and we were unable to recover it.
00:24:36.704 [2024-07-24 20:52:31.979744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.704 [2024-07-24 20:52:31.979769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.704 qpair failed and we were unable to recover it. 00:24:36.704 [2024-07-24 20:52:31.979871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.704 [2024-07-24 20:52:31.979912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.704 qpair failed and we were unable to recover it. 00:24:36.704 [2024-07-24 20:52:31.980014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.704 [2024-07-24 20:52:31.980039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.704 qpair failed and we were unable to recover it. 00:24:36.704 [2024-07-24 20:52:31.980207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.980344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.980468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.980627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.980812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.980940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.980965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.981097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.981255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.981396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.981533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.981703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.981857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.981883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.981995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.982180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.982343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.982531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.982693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.982850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.982875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.982990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.983154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.983287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.983448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.983574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.983728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.983860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.983886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.984304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.984945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.984971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 
00:24:36.705 [2024-07-24 20:52:31.985100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.705 [2024-07-24 20:52:31.985125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.705 qpair failed and we were unable to recover it. 00:24:36.705 [2024-07-24 20:52:31.985264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.985290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.985446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.985471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.985581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.985606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.985737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.985762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.985902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.985927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.986640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.986958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.986983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.987441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.987868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.987999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.988124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.988269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.988434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.988620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.988790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.988929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.988954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.989633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.989928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.989953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.990061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.990233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 
00:24:36.706 [2024-07-24 20:52:31.990376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.990559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.990708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.990902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.990927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.706 qpair failed and we were unable to recover it. 00:24:36.706 [2024-07-24 20:52:31.991069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.706 [2024-07-24 20:52:31.991094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 
00:24:36.707 [2024-07-24 20:52:31.991254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.707 [2024-07-24 20:52:31.991280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 00:24:36.707 [2024-07-24 20:52:31.991426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.707 [2024-07-24 20:52:31.991453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 00:24:36.707 [2024-07-24 20:52:31.991613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.707 [2024-07-24 20:52:31.991638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 00:24:36.707 [2024-07-24 20:52:31.991807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.707 [2024-07-24 20:52:31.991835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 00:24:36.707 [2024-07-24 20:52:31.991971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.707 [2024-07-24 20:52:31.991996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.707 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.008876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.008901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.009646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.009957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.009982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.010394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.010936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.010961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.011099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.011291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.011424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.011552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.011734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.011922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.011947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.012689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.012851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.012983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.013008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.013151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.013176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.013308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.013333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 
00:24:36.710 [2024-07-24 20:52:32.013465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.013490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.013623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.710 [2024-07-24 20:52:32.013648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.710 qpair failed and we were unable to recover it. 00:24:36.710 [2024-07-24 20:52:32.013779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.013804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.013909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.013935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.014047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.014204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.014411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.014543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.014735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.014911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.014937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.015060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.015220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.015393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.015527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.015687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.015819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.015844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.015979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.016107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.016268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.016450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.016590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.016774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.016947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.016972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.017106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.017294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.017426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.017619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.017781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.017972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.017997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.018186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.018345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.018479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.018633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.018784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.018938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.018979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.019152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.019180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.019297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.019323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.019460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.019485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.019656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.019684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.711 [2024-07-24 20:52:32.019858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.019883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 
00:24:36.711 [2024-07-24 20:52:32.020038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.711 [2024-07-24 20:52:32.020065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.711 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.020252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.020281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.020434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.020459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.020639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.020667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.020819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.020847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.020993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.021157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.021309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.021440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.021608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.021744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.021903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.021927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.022061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.022210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.022421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.022607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.022733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.022888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.022913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.023336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.023850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.023976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.024137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.024265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.024425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.024616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.024803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.024933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.024959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.025138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.025260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.025410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.025566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 
00:24:36.712 [2024-07-24 20:52:32.025726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.025855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.025880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.026054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.026083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.026204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.712 [2024-07-24 20:52:32.026230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.712 qpair failed and we were unable to recover it. 00:24:36.712 [2024-07-24 20:52:32.026378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.026403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.026521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.026561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.026693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.026718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.026826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.026851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.026992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.027168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.027338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.027539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.027747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.027927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.027955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.028102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.028130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.028288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.028315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.028479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.028505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.028648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.028689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.028875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.028900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.029014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.029181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.029335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.029466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.029659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.029809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.029971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.029996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.030167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.030343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.030474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.030637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.030795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.030954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.030980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.031106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.031320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.031481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.031630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.031783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.031915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.031948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.032088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.032112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.032214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.032240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 
00:24:36.713 [2024-07-24 20:52:32.032352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.032378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.032516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.032541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.713 qpair failed and we were unable to recover it. 00:24:36.713 [2024-07-24 20:52:32.032676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.713 [2024-07-24 20:52:32.032703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.032823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.032866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.032976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.714 [2024-07-24 20:52:32.033145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.033331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.033496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.033710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.033842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.033867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.714 [2024-07-24 20:52:32.034038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.034238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.034402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.034565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.034748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.714 [2024-07-24 20:52:32.034902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.034927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.035036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.035215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.035403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.035538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.714 [2024-07-24 20:52:32.035708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.035907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.035935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.036082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.036111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.036306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.036332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.036482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.036510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.714 [2024-07-24 20:52:32.036658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.036687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.036838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.036864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.037018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.037043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.037147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.037172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 00:24:36.714 [2024-07-24 20:52:32.037317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.714 [2024-07-24 20:52:32.037343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.714 qpair failed and we were unable to recover it. 
00:24:36.716 [2024-07-24 20:52:32.046359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.716 [2024-07-24 20:52:32.046385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.716 qpair failed and we were unable to recover it.
00:24:36.716 [2024-07-24 20:52:32.046518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.716 [2024-07-24 20:52:32.046543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.716 qpair failed and we were unable to recover it.
00:24:36.716 [2024-07-24 20:52:32.046637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.716 [2024-07-24 20:52:32.046663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.716 qpair failed and we were unable to recover it.
00:24:36.716 [2024-07-24 20:52:32.046841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.716 [2024-07-24 20:52:32.046884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.716 qpair failed and we were unable to recover it.
00:24:36.716 [2024-07-24 20:52:32.047083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.716 [2024-07-24 20:52:32.047110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.716 qpair failed and we were unable to recover it.
00:24:36.717 [2024-07-24 20:52:32.055460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.055486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.055616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.055659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.055791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.055824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.055976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.056101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 
00:24:36.717 [2024-07-24 20:52:32.056283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.056454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.056618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.056778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.717 [2024-07-24 20:52:32.056955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.056980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 
00:24:36.717 [2024-07-24 20:52:32.057121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.717 [2024-07-24 20:52:32.057163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.717 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.057320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.057345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.057629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.057673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.057829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.057857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.057979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.058138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.058304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.058459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.058605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.058810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.058961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.058989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.059122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.059291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.059424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.059579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.059701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.059878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.059906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.060058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.060190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.060362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.060520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.060668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.060878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.060906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.061053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.061078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.061211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.061237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.061388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.061414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.061584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.061609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.061753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.061781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.061972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.062140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.062270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.062462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.062600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.062762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.062957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.062982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 
00:24:36.718 [2024-07-24 20:52:32.063090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.063116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.063251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.063278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.063410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.063435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.063596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.718 [2024-07-24 20:52:32.063621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.718 qpair failed and we were unable to recover it. 00:24:36.718 [2024-07-24 20:52:32.063771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.063799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.063922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.063951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.064084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.064236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.064402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.064565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.064744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.064906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.064932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.065111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.065278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.065405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.065569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.065729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.065879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.065904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.066335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.066834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.066980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.067116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.067311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.067488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.067666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.067839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.067867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.068040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.068193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.068403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.068565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.068691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 
00:24:36.719 [2024-07-24 20:52:32.068881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.068909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.069049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.719 [2024-07-24 20:52:32.069074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.719 qpair failed and we were unable to recover it. 00:24:36.719 [2024-07-24 20:52:32.069214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.069239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.069360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.069385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.069501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.069528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.069710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.069739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.069907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.069956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.070114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.070254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.070391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.070525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.070712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.070874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.070910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.071350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.071854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.071993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.072189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.072393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.072573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.072763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.072905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.072934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.073086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.073111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.073254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.073280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.073393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.073418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.073578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.073603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.073783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.073811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.074025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.074275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.074398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.074528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.074709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.074844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.074869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.075001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.075027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.075154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.075179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.075284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.075309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 00:24:36.720 [2024-07-24 20:52:32.075407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.720 [2024-07-24 20:52:32.075432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.720 qpair failed and we were unable to recover it. 
00:24:36.720 [2024-07-24 20:52:32.075536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.075561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.075664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.075689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.075845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.075871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.075974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.075999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.076137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.076163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.076293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.076335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.076468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.076494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.076635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.076678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.076860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.076886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.077039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.077191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.077329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.077492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.077632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.077835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.077870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.078053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.078211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.078465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.078610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.078774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.078930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.078954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.079133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.079291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.079428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.079600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.079734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.079923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.079950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.080514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.080965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.080990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.081121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.081165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 
00:24:36.721 [2024-07-24 20:52:32.081330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.081358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.081522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.081548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.081693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.721 [2024-07-24 20:52:32.081721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.721 qpair failed and we were unable to recover it. 00:24:36.721 [2024-07-24 20:52:32.081890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.081918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.082074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 
00:24:36.722 [2024-07-24 20:52:32.082208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.082342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.082466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.082628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.082815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 
00:24:36.722 [2024-07-24 20:52:32.082973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.082998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.083109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.083302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.083466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.083599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 
00:24:36.722 [2024-07-24 20:52:32.083759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.083922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.083947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.084056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.084082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.084215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.084255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 00:24:36.722 [2024-07-24 20:52:32.084415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.722 [2024-07-24 20:52:32.084440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.722 qpair failed and we were unable to recover it. 
00:24:36.723 [2024-07-24 20:52:32.093504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.723 [2024-07-24 20:52:32.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.093655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.093680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.093833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.093891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.094949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.094994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.724 [2024-07-24 20:52:32.095136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.724 [2024-07-24 20:52:32.095164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.724 qpair failed and we were unable to recover it.
00:24:36.725 [2024-07-24 20:52:32.102790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.102815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.102939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.102968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.103127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.103314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.103459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 
00:24:36.725 [2024-07-24 20:52:32.103616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.103796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.103918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.103958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.104128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.104156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.104311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.104337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 
00:24:36.725 [2024-07-24 20:52:32.104446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.104470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.725 qpair failed and we were unable to recover it. 00:24:36.725 [2024-07-24 20:52:32.104590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.725 [2024-07-24 20:52:32.104618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.104802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.104828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.104934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.104958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.105081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.105266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.105446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.105610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.105784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.105941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.105966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.106160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.106279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.106406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.106581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.106732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.106915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.106940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.107089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.107117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.107263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.107289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.107433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.107475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.107601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.107628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.107800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.107825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.107972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.108135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.108289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.108432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.108618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.108791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.108944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.108969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.109079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.109262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.109431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.109600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.109775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.109917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.109942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.110040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.726 [2024-07-24 20:52:32.110222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.110414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.110589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.110795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 00:24:36.726 [2024-07-24 20:52:32.110957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.726 [2024-07-24 20:52:32.110982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.726 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.111108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.111260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.111393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.111539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.111720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.111850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.111875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.112009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.112191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.112362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.112501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.112692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.112871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.112899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.113019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.113199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.113339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.113543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.113724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.113848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.113873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.114008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.114189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.114360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.114534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.114709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.114847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.114873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.115033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.727 [2024-07-24 20:52:32.115226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.115362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.115577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.115752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 00:24:36.727 [2024-07-24 20:52:32.115912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.727 [2024-07-24 20:52:32.115937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.727 qpair failed and we were unable to recover it. 
00:24:36.730 [2024-07-24 20:52:32.133296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.730 [2024-07-24 20:52:32.133322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.730 qpair failed and we were unable to recover it. 00:24:36.730 [2024-07-24 20:52:32.133481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.730 [2024-07-24 20:52:32.133506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.730 qpair failed and we were unable to recover it. 00:24:36.730 [2024-07-24 20:52:32.133639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.730 [2024-07-24 20:52:32.133664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.730 qpair failed and we were unable to recover it. 00:24:36.730 [2024-07-24 20:52:32.133765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.730 [2024-07-24 20:52:32.133790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.730 qpair failed and we were unable to recover it. 00:24:36.730 [2024-07-24 20:52:32.133891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.730 [2024-07-24 20:52:32.133916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.730 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.134094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.134217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.134375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.134528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.134657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.134797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.134930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.134955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.135087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.135216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.135398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.135578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.135769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.135909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.135934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.136357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.136855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.136993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.137141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.137307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.137460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.137619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.137753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.137896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.137921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.138582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.138894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.138994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.139020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.139157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.139183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 
00:24:36.731 [2024-07-24 20:52:32.139321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.139347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.139480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.139506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.139618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.731 [2024-07-24 20:52:32.139643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.731 qpair failed and we were unable to recover it. 00:24:36.731 [2024-07-24 20:52:32.139807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.139834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.139956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.139981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.140107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.140265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.140409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.140531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.140688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.140848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.140873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.141667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.141871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.141983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.142186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.142383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.142561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.142734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.142916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.142941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.143361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.143827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.143983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.144152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.144290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.144427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.144555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.144695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.732 [2024-07-24 20:52:32.144829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.144856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.145012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.145156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.145181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.145314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.145339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 00:24:36.732 [2024-07-24 20:52:32.145477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.732 [2024-07-24 20:52:32.145502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.732 qpair failed and we were unable to recover it. 
00:24:36.735 [2024-07-24 20:52:32.162063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.162219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.162357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.162522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.162676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 
00:24:36.735 [2024-07-24 20:52:32.162830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.162855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.162991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.163016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.163118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.163143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.163274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.163300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 00:24:36.735 [2024-07-24 20:52:32.163418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.163443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.735 qpair failed and we were unable to recover it. 
00:24:36.735 [2024-07-24 20:52:32.163579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.735 [2024-07-24 20:52:32.163604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.163761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.163786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.163918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.163943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.164367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.164957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.164982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.165113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.165247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.165404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.165530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.165714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.165845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.165870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.166635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.166945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.166971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.167402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.167952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.167980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.168111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.168234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.168392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.168550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.168734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.168883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.168912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.169086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.169269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.169466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.169614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.169774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.169957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.169985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.170110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.170135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.170269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.170295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 00:24:36.736 [2024-07-24 20:52:32.170448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.170476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.736 qpair failed and we were unable to recover it. 
00:24:36.736 [2024-07-24 20:52:32.170607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.736 [2024-07-24 20:52:32.170633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.170761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.170786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.170929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.170957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.171120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.171146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.171271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.171298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 
00:24:36.737 [2024-07-24 20:52:32.171474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.171502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.171643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.171668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.171797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.171838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.172011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.172187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 
00:24:36.737 [2024-07-24 20:52:32.172369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.172559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.172739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.172926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.172951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.173082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 
00:24:36.737 [2024-07-24 20:52:32.173262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.173404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.173586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.173771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 00:24:36.737 [2024-07-24 20:52:32.173913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.737 [2024-07-24 20:52:32.173938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.737 qpair failed and we were unable to recover it. 
00:24:36.737 [2024-07-24 20:52:32.174068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.174859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.174986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.175837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.175974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.176188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.176333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.176489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.176686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.176864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.176893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.177019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.737 [2024-07-24 20:52:32.177047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.737 qpair failed and we were unable to recover it.
00:24:36.737 [2024-07-24 20:52:32.177180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.177323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.177483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.177647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.177809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.177967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.177992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.178823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.178851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.179843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.179872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.180843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.180868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.181000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.181026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.181154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.181196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.181366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.181392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.181554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.181579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.738 [2024-07-24 20:52:32.181735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.738 [2024-07-24 20:52:32.181765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.738 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.181882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.181910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.182901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.182929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.183970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.183995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.184882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.184908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.185090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.185250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.185453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.185627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.185770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.185978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.186822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.186975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.187155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.187359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.187552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.187738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.739 [2024-07-24 20:52:32.187909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.739 [2024-07-24 20:52:32.187953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.739 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.188071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.188096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.188266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.188309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.188456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.188500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.188631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.188675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.188848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.188892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.189885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.189911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.190964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.190988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.191919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.191944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.192873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.192900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.193022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.740 [2024-07-24 20:52:32.193052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.740 qpair failed and we were unable to recover it.
00:24:36.740 [2024-07-24 20:52:32.193205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.193341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.193476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.193628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.193787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 
00:24:36.740 [2024-07-24 20:52:32.193965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.193992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.194110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.740 [2024-07-24 20:52:32.194135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.740 qpair failed and we were unable to recover it. 00:24:36.740 [2024-07-24 20:52:32.194272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.194298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.194465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.194490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.194677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.194705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.194849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.194877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.195000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.195199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.195393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.195528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.195716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.195897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.195940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.196486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.196962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.196989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.197137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.197302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.197449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.197635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.197810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.197971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.197999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.198177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.198202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.198347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.198373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.198469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.198494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.198679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.198707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.198819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.198848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.198975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.199152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.199363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.199508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.199723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 
00:24:36.741 [2024-07-24 20:52:32.199897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.199941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.200079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.200105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.200218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.200250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.741 [2024-07-24 20:52:32.200386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.741 [2024-07-24 20:52:32.200413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.741 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.200520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.200546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.200703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.200728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.200841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.200866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.201450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.201900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.201925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.202047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.202212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.202430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.202589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.202717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.202897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.202924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.203129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.203291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.203422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.203589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.203738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.203911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.203939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.204047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.204237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.204407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.204579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.204762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.204927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.204956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.205163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.205220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.205393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.205420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 00:24:36.742 [2024-07-24 20:52:32.205532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.742 [2024-07-24 20:52:32.205575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.742 qpair failed and we were unable to recover it. 
00:24:36.742 [2024-07-24 20:52:32.205714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.743 [2024-07-24 20:52:32.205761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.743 qpair failed and we were unable to recover it. 00:24:36.743 [2024-07-24 20:52:32.205925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.743 [2024-07-24 20:52:32.205951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:36.743 qpair failed and we were unable to recover it. 00:24:36.743 [2024-07-24 20:52:32.206067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.743 [2024-07-24 20:52:32.206093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.743 qpair failed and we were unable to recover it. 00:24:36.743 [2024-07-24 20:52:32.206224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.743 [2024-07-24 20:52:32.206254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.743 qpair failed and we were unable to recover it. 00:24:36.743 [2024-07-24 20:52:32.206361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.743 [2024-07-24 20:52:32.206403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:36.743 qpair failed and we were unable to recover it. 
00:24:36.744 [2024-07-24 20:52:32.214888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.744 [2024-07-24 20:52:32.214915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.744 qpair failed and we were unable to recover it.
00:24:36.744 [2024-07-24 20:52:32.215029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.744 [2024-07-24 20:52:32.215057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:36.744 qpair failed and we were unable to recover it.
00:24:36.744 [2024-07-24 20:52:32.215251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.744 [2024-07-24 20:52:32.215310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.744 qpair failed and we were unable to recover it.
00:24:36.744 [2024-07-24 20:52:32.215450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.744 [2024-07-24 20:52:32.215477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.744 qpair failed and we were unable to recover it.
00:24:36.744 [2024-07-24 20:52:32.215662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.744 [2024-07-24 20:52:32.215690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:36.744 qpair failed and we were unable to recover it.
00:24:36.746 [2024-07-24 20:52:32.225413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.225443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.225549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.225576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.225737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.225780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.225944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.225971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.226128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.226282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.226434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.226562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.226746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.226919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.226946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.227095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.227120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.227254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.227297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.227421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.227448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.227597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.227623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.227804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.227831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.227980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.228162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.228330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.228511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.228686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.228858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.228886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.229039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.229224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.229415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.229555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.229705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.229895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.229921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.230082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.230110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.230299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.230327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.230462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.230488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.230650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.230678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.230824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.230849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.230985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.231028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.231177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.231206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 00:24:36.746 [2024-07-24 20:52:32.231374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.231401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.746 qpair failed and we were unable to recover it. 
00:24:36.746 [2024-07-24 20:52:32.231516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.746 [2024-07-24 20:52:32.231566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.231731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.231761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.231927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.231960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.232106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.232135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.232304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.232332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 
00:24:36.747 [2024-07-24 20:52:32.232466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.232496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.232608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.232633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.232791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.232817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.232977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.233002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.233185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.233224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 
00:24:36.747 [2024-07-24 20:52:32.233393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.233423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:36.747 qpair failed and we were unable to recover it. 00:24:36.747 [2024-07-24 20:52:32.233582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.747 [2024-07-24 20:52:32.233608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.233769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.233794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.233900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.233925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.234030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.234164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.234301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.234426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.234556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.234746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.234956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.234989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.235165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.235199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.235396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.235436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.235599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.235635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.235765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.235799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.235974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.236014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.236161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.236197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.236344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.236379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.236587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.236623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.236802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.236835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.236993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.237044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.237199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.237236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.237433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.237467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.237614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.237664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.237820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.237856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.238026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.238203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.238382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.238599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.238754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 
00:24:37.031 [2024-07-24 20:52:32.238937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.238964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.031 qpair failed and we were unable to recover it. 00:24:37.031 [2024-07-24 20:52:32.239085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.031 [2024-07-24 20:52:32.239111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.032 qpair failed and we were unable to recover it. 00:24:37.032 [2024-07-24 20:52:32.239249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.032 [2024-07-24 20:52:32.239275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.032 qpair failed and we were unable to recover it. 00:24:37.032 [2024-07-24 20:52:32.239401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.032 [2024-07-24 20:52:32.239429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.032 qpair failed and we were unable to recover it. 00:24:37.032 [2024-07-24 20:52:32.239589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.032 [2024-07-24 20:52:32.239613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.032 qpair failed and we were unable to recover it. 
00:24:37.035 [2024-07-24 20:52:32.257957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.257982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.258118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.258143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.258304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.258343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.258487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.258514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.258669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.258713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 
00:24:37.035 [2024-07-24 20:52:32.258867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.258910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 
00:24:37.035 [2024-07-24 20:52:32.259701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.259883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.259991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.260161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.260305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 
00:24:37.035 [2024-07-24 20:52:32.260470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.260663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.260913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.260965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.261106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.261134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.261305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.261330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 
00:24:37.035 [2024-07-24 20:52:32.261461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.261486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.261588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.261613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.035 qpair failed and we were unable to recover it. 00:24:37.035 [2024-07-24 20:52:32.261817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.035 [2024-07-24 20:52:32.261843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.261981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.262120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.262284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.262463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.262616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.262764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.262941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.262970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.263111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.263139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.263321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.263346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.263478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.263503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.263652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.263678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.263817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.263843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.263986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.264194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.264369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.264514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.264710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.264880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.264924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.265030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.265249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.265401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.265585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.265798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.265960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.265988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.266138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.266166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.266347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.266373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.266479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.266521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 
00:24:37.036 [2024-07-24 20:52:32.266658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-07-24 20:52:32.266685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.036 qpair failed and we were unable to recover it. 00:24:37.036 [2024-07-24 20:52:32.266872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.266926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.267081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.267262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.267415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 
00:24:37.037 [2024-07-24 20:52:32.267556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.267743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.267899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.267941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.268082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.268253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 
00:24:37.037 [2024-07-24 20:52:32.268426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.268569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.268780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.268912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.268939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.269086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 
00:24:37.037 [2024-07-24 20:52:32.269255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.269432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.269587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.269751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.269956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.269989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 
00:24:37.037 [2024-07-24 20:52:32.270109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.270323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.270487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.270611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.270785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 
00:24:37.037 [2024-07-24 20:52:32.270960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.270988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.271110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.271138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.271262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.037 [2024-07-24 20:52:32.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.037 qpair failed and we were unable to recover it. 00:24:37.037 [2024-07-24 20:52:32.271414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.271439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.271576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.271601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 
00:24:37.038 [2024-07-24 20:52:32.271733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.271758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.271876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.271903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.272041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.272069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.272191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.272219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.272394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.272433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 
00:24:37.038 [2024-07-24 20:52:32.272605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.272632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.272819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.272863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.273048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.273093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.273240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.273292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.273410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.273436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 
00:24:37.038 [2024-07-24 20:52:32.273590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.273636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.273795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.273837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.274024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.274074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.274181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.274207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.274397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.274442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 
00:24:37.038 [2024-07-24 20:52:32.274576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.274605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.038 qpair failed and we were unable to recover it. 00:24:37.038 [2024-07-24 20:52:32.274773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.038 [2024-07-24 20:52:32.274821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.275004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.275176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.275337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 
00:24:37.039 [2024-07-24 20:52:32.275504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.275706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.275883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.275909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.276046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.276209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 
00:24:37.039 [2024-07-24 20:52:32.276406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.276585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.276730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.276904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.276932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.277073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 
00:24:37.039 [2024-07-24 20:52:32.277261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.277304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.277458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.277486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.277638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.277666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.277814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.277842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.278020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.278048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 
00:24:37.039 [2024-07-24 20:52:32.278223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.278256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.278413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.278438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.278595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.278639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.278832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.278875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.279058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.279113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 
00:24:37.039 [2024-07-24 20:52:32.279225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.279255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.279393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.279418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.039 [2024-07-24 20:52:32.279600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.039 [2024-07-24 20:52:32.279644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.039 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.279755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.279786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.279889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.279915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.040 [2024-07-24 20:52:32.280047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.280072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.280251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.280278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.280385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.280410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.280568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.280596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.280801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.280852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.040 [2024-07-24 20:52:32.281029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.281170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.281334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.281469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.281675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.040 [2024-07-24 20:52:32.281878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.281905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.282079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.282107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.282305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.282331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.282442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.282467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.282642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.282670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.040 [2024-07-24 20:52:32.282867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.282895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.283117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.283256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.283405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.283546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.040 [2024-07-24 20:52:32.283708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.283900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.283927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.284077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.284105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.284338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.284363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 00:24:37.040 [2024-07-24 20:52:32.284521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.040 [2024-07-24 20:52:32.284546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.040 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.284760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.284792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.284936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.284964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.285117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.285144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.285314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.285353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.285501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.285527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.285676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.285719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.285823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.285848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.286000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.286152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.286341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.286517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.286717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.286890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.286935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.287074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.287099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.287218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.287250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.287406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.287449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.287606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.287649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.287835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.287877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.288290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.288966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.288995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 
00:24:37.041 [2024-07-24 20:52:32.289168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.289196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.041 [2024-07-24 20:52:32.289365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.041 [2024-07-24 20:52:32.289391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.041 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.289551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.289581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.289770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.289798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.289941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.289969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 
00:24:37.042 [2024-07-24 20:52:32.290092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.290135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.290256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.290300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.290472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.290497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.290681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.290737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.290872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.290900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 
00:24:37.042 [2024-07-24 20:52:32.291019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.291191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.291364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.291494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.291643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 
00:24:37.042 [2024-07-24 20:52:32.291837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.291881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.292046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.292072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.292205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.292232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.292354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.292380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.292544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.292569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 
00:24:37.042 [2024-07-24 20:52:32.292756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.292805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.292978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.293174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.293338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.293489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 
00:24:37.042 [2024-07-24 20:52:32.293647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.293790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.293958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.293986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.042 qpair failed and we were unable to recover it. 00:24:37.042 [2024-07-24 20:52:32.294105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.042 [2024-07-24 20:52:32.294133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.294280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.294325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.294497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.294524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.294686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.294728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.294911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.294954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.295095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.295120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.295287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.295313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.295437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.295481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.295645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.295687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.295866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.295909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.296027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.296213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.296396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.296599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.296762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.296947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.296972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.297114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.297141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.297295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.297324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.297471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.297515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.297677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.297725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.297885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.297911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.298044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.298172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.298359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.298526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.298750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 00:24:37.043 [2024-07-24 20:52:32.298905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.043 [2024-07-24 20:52:32.298931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.043 qpair failed and we were unable to recover it. 
00:24:37.043 [2024-07-24 20:52:32.299088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.299113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.299317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.299348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.299492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.299519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.299651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.299694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.299854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.299881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 
00:24:37.044 [2024-07-24 20:52:32.300062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.300088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.300210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.300239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.300440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.300467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.300632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.300659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.300772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.300815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 
00:24:37.044 [2024-07-24 20:52:32.301037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.301206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.301416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.301616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.301811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 
00:24:37.044 [2024-07-24 20:52:32.301971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.301996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.302101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.302263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.302402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.302586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 
00:24:37.044 [2024-07-24 20:52:32.302767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.302919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.302944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.303106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.303132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.303298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.303341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.303524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.303567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 
00:24:37.044 [2024-07-24 20:52:32.303693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.303737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.303867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.303892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.304029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.304055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.044 [2024-07-24 20:52:32.304207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.044 [2024-07-24 20:52:32.304235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.044 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.304401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.304428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 
00:24:37.045 [2024-07-24 20:52:32.304589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.304617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.304752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.304777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.304903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.304931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.305072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.305277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 
00:24:37.045 [2024-07-24 20:52:32.305410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.305550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.305785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.305941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.305969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.306120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 
00:24:37.045 [2024-07-24 20:52:32.306300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.306424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.306587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.306757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 00:24:37.045 [2024-07-24 20:52:32.306930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.045 [2024-07-24 20:52:32.306973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.045 qpair failed and we were unable to recover it. 
00:24:37.047 [2024-07-24 20:52:32.318218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.047 [2024-07-24 20:52:32.318252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.047 qpair failed and we were unable to recover it.
00:24:37.047 [2024-07-24 20:52:32.318369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.047 [2024-07-24 20:52:32.318395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.047 qpair failed and we were unable to recover it.
00:24:37.047 [2024-07-24 20:52:32.318510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.047 [2024-07-24 20:52:32.318548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:37.047 qpair failed and we were unable to recover it.
00:24:37.047 [2024-07-24 20:52:32.318737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.047 [2024-07-24 20:52:32.318782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:37.047 qpair failed and we were unable to recover it.
00:24:37.047 [2024-07-24 20:52:32.318931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.047 [2024-07-24 20:52:32.318974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:37.047 qpair failed and we were unable to recover it.
00:24:37.047 [2024-07-24 20:52:32.319135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.319161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.319297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.319324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.319470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.319511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.319663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.319705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.319887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.319929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 
00:24:37.047 [2024-07-24 20:52:32.320036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.320199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.320362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.320568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.320742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 
00:24:37.047 [2024-07-24 20:52:32.320950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.320975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.321109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.321136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.321276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.321303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.321441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.321467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.321599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.321624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 
00:24:37.047 [2024-07-24 20:52:32.321820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.321870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.322020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.047 [2024-07-24 20:52:32.322048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.047 qpair failed and we were unable to recover it. 00:24:37.047 [2024-07-24 20:52:32.322192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.322221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.322401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.322447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.322592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.322635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 
00:24:37.048 [2024-07-24 20:52:32.322771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.322813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.322961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.323008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.323139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.323165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.323295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.323322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 00:24:37.048 [2024-07-24 20:52:32.323455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.048 [2024-07-24 20:52:32.323498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.048 qpair failed and we were unable to recover it. 
00:24:37.048 [2024-07-24 20:52:32.326311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.048 [2024-07-24 20:52:32.326354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:37.048 qpair failed and we were unable to recover it.
00:24:37.048 [2024-07-24 20:52:32.326485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.048 [2024-07-24 20:52:32.326529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:37.048 qpair failed and we were unable to recover it.
00:24:37.048 [2024-07-24 20:52:32.326635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.048 [2024-07-24 20:52:32.326662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.048 qpair failed and we were unable to recover it.
00:24:37.048 [2024-07-24 20:52:32.326802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.048 [2024-07-24 20:52:32.326827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.048 qpair failed and we were unable to recover it.
00:24:37.048 [2024-07-24 20:52:32.327043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.048 [2024-07-24 20:52:32.327068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.048 qpair failed and we were unable to recover it.
00:24:37.052 [2024-07-24 20:52:32.343258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.343284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.343440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.343465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.343617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.343646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.343804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.343830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.344009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 
00:24:37.052 [2024-07-24 20:52:32.344196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.344391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.344531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.344729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.344929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.344957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 
00:24:37.052 [2024-07-24 20:52:32.345067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.345229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.345411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.345584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.345745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 
00:24:37.052 [2024-07-24 20:52:32.345939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.345967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.346101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.346144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.346327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.346353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.346478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.346504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.346636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.346679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 
00:24:37.052 [2024-07-24 20:52:32.346839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.346867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.347035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.347212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.347429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.347566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 
00:24:37.052 [2024-07-24 20:52:32.347695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.052 [2024-07-24 20:52:32.347827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.052 [2024-07-24 20:52:32.347853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.052 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.348036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.348210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.348422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.348553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.348737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.348912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.348940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.349065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.349221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.349451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.349615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.349771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.349964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.349991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.350155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.350288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.350451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.350603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.350735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.350922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.350950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.351073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.351249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.351408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.351586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.351741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.351959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.351987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.352157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.352347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.352478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.352609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 
00:24:37.053 [2024-07-24 20:52:32.352793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.352969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.352997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.353132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.353157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.353262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.353287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.053 qpair failed and we were unable to recover it. 00:24:37.053 [2024-07-24 20:52:32.353473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.053 [2024-07-24 20:52:32.353501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 
00:24:37.054 [2024-07-24 20:52:32.353716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.353741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.353871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.353912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.354129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.354157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.354354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.354380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.354481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.354522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 
00:24:37.054 [2024-07-24 20:52:32.354675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.354716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.354876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.354901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.355013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.355174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.355334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 
00:24:37.054 [2024-07-24 20:52:32.355473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.355672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.355875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.355900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.356014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.356039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 00:24:37.054 [2024-07-24 20:52:32.356149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.054 [2024-07-24 20:52:32.356175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.054 qpair failed and we were unable to recover it. 
00:24:37.054 [2024-07-24 20:52:32.356310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.054 [2024-07-24 20:52:32.356336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.054 qpair failed and we were unable to recover it.
00:24:37.054 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for each subsequent connection attempt through 2024-07-24 20:52:32.376294 ...]
00:24:37.057 [2024-07-24 20:52:32.376404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.376430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.376572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.376600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.376757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.376782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.376928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.376954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.377061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 
00:24:37.057 [2024-07-24 20:52:32.377229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.377397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.377532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.377743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.377900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 
00:24:37.057 [2024-07-24 20:52:32.378069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.057 [2024-07-24 20:52:32.378097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.057 qpair failed and we were unable to recover it. 00:24:37.057 [2024-07-24 20:52:32.378221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.378253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.378369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.378394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.378555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.378580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.378695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.378720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.378894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.378922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.379039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.379188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.379357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.379543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.379733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.379933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.379961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.380109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.380299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.380427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.380623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.380753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.380883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.380908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.381341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.381864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.381988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.382182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.382371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.382550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.382717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.382924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.382962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 
00:24:37.058 [2024-07-24 20:52:32.383103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.383159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.383316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.383345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.383485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.058 [2024-07-24 20:52:32.383529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.058 qpair failed and we were unable to recover it. 00:24:37.058 [2024-07-24 20:52:32.383712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.383741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.383866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.383891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.384008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.384191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.384382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.384549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.384762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.384919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.384944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.385685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.385893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.385994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.386181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.386394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.386528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.386722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.386964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.386989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.387171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.387203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.387339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.387366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.387476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.387501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.387640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.387682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.387876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.387923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.388055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.388082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.388230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.388278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.388504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.388550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.388709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.388735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.388947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.388975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.389103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.389297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.059 [2024-07-24 20:52:32.389423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.389551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.389729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.389892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.389917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 00:24:37.059 [2024-07-24 20:52:32.390052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.059 [2024-07-24 20:52:32.390080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.059 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.408386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.408411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.408511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.408537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.408684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.408710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.408858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.408886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.409017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.409182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.409356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.409516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.409669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.409831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.409857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.409992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.410122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.410327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.410513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.410684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.410883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.410911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.411093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.411118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.411270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.411299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.411440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.411468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.411641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.411666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.411843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.411871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 
00:24:37.063 [2024-07-24 20:52:32.412641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.412811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.412981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.413009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.413125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.063 [2024-07-24 20:52:32.413151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.063 qpair failed and we were unable to recover it. 00:24:37.063 [2024-07-24 20:52:32.413287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.413313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.413445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.413473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.413618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.413644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.413779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.413804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.413906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.413931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.414067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.414208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.414380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.414578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.414738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.414922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.414947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.415087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.415131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.415292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.415318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.415449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.415475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.415610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.415636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.415765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.415812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.415987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.416195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.416334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.416495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.416629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.416799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.416953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.416993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.417131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.417318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.417451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.417613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.417774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.417903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.417928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.418039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.418065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.418197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.418223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 
00:24:37.064 [2024-07-24 20:52:32.418392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.418456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.418656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.064 [2024-07-24 20:52:32.418694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.064 qpair failed and we were unable to recover it. 00:24:37.064 [2024-07-24 20:52:32.418840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.418878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.419054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.419124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.419332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.419369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 
00:24:37.065 [2024-07-24 20:52:32.419536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.419573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.419708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.419735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.419849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.419874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 
00:24:37.065 [2024-07-24 20:52:32.420366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.420970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.420995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 
00:24:37.065 [2024-07-24 20:52:32.421126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.421289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.421432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.421589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.421749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 
00:24:37.065 [2024-07-24 20:52:32.421911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.421938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.422132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.422160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.422284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.422311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.422419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.422444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 00:24:37.065 [2024-07-24 20:52:32.422585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.065 [2024-07-24 20:52:32.422610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.065 qpair failed and we were unable to recover it. 
00:24:37.065 [2024-07-24 20:52:32.422779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.065 [2024-07-24 20:52:32.422804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.065 qpair failed and we were unable to recover it.
00:24:37.065 [2024-07-24 20:52:32.422913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.065 [2024-07-24 20:52:32.422939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.065 qpair failed and we were unable to recover it.
00:24:37.065 [2024-07-24 20:52:32.423068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.065 [2024-07-24 20:52:32.423096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.065 qpair failed and we were unable to recover it.
00:24:37.065 [2024-07-24 20:52:32.423240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.065 [2024-07-24 20:52:32.423273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.423377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.423402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.423540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.423565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.423705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.423730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.423833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.423858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.423994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.424963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.424991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.425919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.425947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.426965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.426990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.066 qpair failed and we were unable to recover it.
00:24:37.066 [2024-07-24 20:52:32.427096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.066 [2024-07-24 20:52:32.427120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.427258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.427286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.427438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.427463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.427578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.427603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.427728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.427769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.427907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.427933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.428894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.428919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.429893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.429918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.430070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.430302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.430472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.430653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.430807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.430963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.431146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.431401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.431586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.431748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.431936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.431964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.432111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.432136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.432250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.432276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.432466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.432493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.432647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.432672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.432828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.432856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.433006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.067 [2024-07-24 20:52:32.433031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.067 qpair failed and we were unable to recover it.
00:24:37.067 [2024-07-24 20:52:32.433166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.433191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.433302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.433343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.433516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.433544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.433675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.433700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.433835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.433860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.433978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.434930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.434958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.435865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.435991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.436814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.436998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.437173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.437383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.437563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.437689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.437897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.437926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.438044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.438069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.438187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.068 [2024-07-24 20:52:32.438212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.068 qpair failed and we were unable to recover it.
00:24:37.068 [2024-07-24 20:52:32.438396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.438425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.438552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.438577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.438685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.438710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.438836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.438863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.438993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.439018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 
00:24:37.068 [2024-07-24 20:52:32.439152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.439177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.439328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.439358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.439522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.068 [2024-07-24 20:52:32.439547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.068 qpair failed and we were unable to recover it. 00:24:37.068 [2024-07-24 20:52:32.439686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.439711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.439845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.439870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.439980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.440108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.440261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.440437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.440610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.440805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.440833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.440982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.441144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.441309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.441464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.441676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.441861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.441885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.441982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.442153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.442313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.442471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.442607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.442786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.442963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.442988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.443123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.443283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.443411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.443539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.443693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.443880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.443905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.444076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.444256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.444437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.444577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.444723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 
00:24:37.069 [2024-07-24 20:52:32.444882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.444907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.445035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.445060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.445195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.445222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.069 [2024-07-24 20:52:32.445377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.069 [2024-07-24 20:52:32.445403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.069 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.445510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.445536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 
00:24:37.070 [2024-07-24 20:52:32.445661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.445688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.445807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.445832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.445997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.446136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.446319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 
00:24:37.070 [2024-07-24 20:52:32.446456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.446631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.446775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.446924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.446949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.447094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 
00:24:37.070 [2024-07-24 20:52:32.447271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.447436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.447642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.447773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.447910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.447936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 
00:24:37.070 [2024-07-24 20:52:32.448056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.448185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.448318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.448501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.448630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 
00:24:37.070 [2024-07-24 20:52:32.448757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.448908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.448935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.449038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.449080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.449188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.449213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.070 qpair failed and we were unable to recover it. 00:24:37.070 [2024-07-24 20:52:32.449332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.070 [2024-07-24 20:52:32.449357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 
00:24:37.071 [2024-07-24 20:52:32.449490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.449515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.449675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.449701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.449852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.449878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.449998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.450134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 
00:24:37.071 [2024-07-24 20:52:32.450321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.450458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.450615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.450762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.450911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.450935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 
00:24:37.071 [2024-07-24 20:52:32.451065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.451091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.451212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.451238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.451383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.451408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.451523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.451549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 00:24:37.071 [2024-07-24 20:52:32.451722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.071 [2024-07-24 20:52:32.451748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.071 qpair failed and we were unable to recover it. 
00:24:37.071 [2024-07-24 20:52:32.451894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.071 [2024-07-24 20:52:32.451919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.071 qpair failed and we were unable to recover it.
00:24:37.075 [the same three messages — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeat verbatim with advancing timestamps from 20:52:32.451894 through 20:52:32.469707; repeats omitted]
00:24:37.075 [2024-07-24 20:52:32.469867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.469892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 
00:24:37.075 [2024-07-24 20:52:32.470613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.470901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.470926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 
00:24:37.075 [2024-07-24 20:52:32.471361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.471909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.471934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 
00:24:37.075 [2024-07-24 20:52:32.472033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.472058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.472162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.472188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.472324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.472351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.075 qpair failed and we were unable to recover it. 00:24:37.075 [2024-07-24 20:52:32.472507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.075 [2024-07-24 20:52:32.472532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.472705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.472731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 
00:24:37.076 [2024-07-24 20:52:32.472867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.472892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 
00:24:37.076 [2024-07-24 20:52:32.473617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.473899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.473924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 
00:24:37.076 [2024-07-24 20:52:32.474419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.474877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.474985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 
00:24:37.076 [2024-07-24 20:52:32.475123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.475264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.475411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.475553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 00:24:37.076 [2024-07-24 20:52:32.475715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.076 [2024-07-24 20:52:32.475740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.076 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.475844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.475870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.475999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.476160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.476320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.476501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.476680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.476827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.476853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.477499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.477937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.477964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.478070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.478232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.478400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.478598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.478768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.478929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.478953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.479088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.479113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.479249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.479276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.479409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.479434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.479557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.479585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.479766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.479794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.479966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.480161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.480322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.480460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.480614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 
00:24:37.077 [2024-07-24 20:52:32.480795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.480822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.481027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.077 [2024-07-24 20:52:32.481054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.077 qpair failed and we were unable to recover it. 00:24:37.077 [2024-07-24 20:52:32.481199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.078 [2024-07-24 20:52:32.481226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.078 qpair failed and we were unable to recover it. 00:24:37.078 [2024-07-24 20:52:32.481378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.078 [2024-07-24 20:52:32.481403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.078 qpair failed and we were unable to recover it. 00:24:37.078 [2024-07-24 20:52:32.481504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.078 [2024-07-24 20:52:32.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.078 qpair failed and we were unable to recover it. 
00:24:37.078 [2024-07-24 20:52:32.481665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.078 [2024-07-24 20:52:32.481690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.078 qpair failed and we were unable to recover it.
00:24:37.081 [2024-07-24 20:52:32.501107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.501268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.501439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.501598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.501762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 
00:24:37.081 [2024-07-24 20:52:32.501921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.501946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.502143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.502328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.502467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.502631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 
00:24:37.081 [2024-07-24 20:52:32.502766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.502924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.502948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.503113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.503137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.503252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.503278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.503384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.503409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 
00:24:37.081 [2024-07-24 20:52:32.503514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.081 [2024-07-24 20:52:32.503539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.081 qpair failed and we were unable to recover it. 00:24:37.081 [2024-07-24 20:52:32.503670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.503694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.503825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.503850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.503977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.504132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.504300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.504469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.504643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.504827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.504855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.505017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.505164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.505337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.505563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.505744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.505940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.505968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.506088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.506281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.506473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.506632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.506803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.506961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.506985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.507114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.507267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.507396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.507546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.507667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.507824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.507849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.508008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.508034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.508164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.508188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 00:24:37.082 [2024-07-24 20:52:32.508329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.082 [2024-07-24 20:52:32.508355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.082 qpair failed and we were unable to recover it. 
00:24:37.082 [2024-07-24 20:52:32.508483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.508508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.508659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.508683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.508853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.508878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.509316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.509922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.509949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.510105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.510238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.510373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.510517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.510699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.510878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.510905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.511028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.511177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.511317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.511501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.511635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.511828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.511856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.512008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.512160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.512365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.512570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.512709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.512931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.512959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.083 [2024-07-24 20:52:32.513431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.513880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 00:24:37.083 [2024-07-24 20:52:32.513985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.083 [2024-07-24 20:52:32.514010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.083 qpair failed and we were unable to recover it. 
00:24:37.087 [2024-07-24 20:52:32.528847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.087 [2024-07-24 20:52:32.528888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.087 qpair failed and we were unable to recover it.
00:24:37.087 [2024-07-24 20:52:32.529012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.087 [2024-07-24 20:52:32.529040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.087 qpair failed and we were unable to recover it.
00:24:37.087 [2024-07-24 20:52:32.529169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.087 [2024-07-24 20:52:32.529193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.087 qpair failed and we were unable to recover it.
00:24:37.087 [2024-07-24 20:52:32.529352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.087 [2024-07-24 20:52:32.529391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.087 qpair failed and we were unable to recover it.
00:24:37.087 [2024-07-24 20:52:32.529507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.087 [2024-07-24 20:52:32.529535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.087 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.529700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.529726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.529891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.529917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.530858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.530884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.531025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.531052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.531179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.088 [2024-07-24 20:52:32.531203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.088 qpair failed and we were unable to recover it.
00:24:37.088 [2024-07-24 20:52:32.531360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.089 [2024-07-24 20:52:32.531386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.089 qpair failed and we were unable to recover it.
00:24:37.089 [2024-07-24 20:52:32.531538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.089 [2024-07-24 20:52:32.531565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.089 qpair failed and we were unable to recover it.
00:24:37.089 [2024-07-24 20:52:32.531749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.089 [2024-07-24 20:52:32.531774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.089 qpair failed and we were unable to recover it.
00:24:37.089 [2024-07-24 20:52:32.531911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.089 [2024-07-24 20:52:32.531935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.089 qpair failed and we were unable to recover it.
00:24:37.089 [2024-07-24 20:52:32.532037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.089 [2024-07-24 20:52:32.532062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.089 qpair failed and we were unable to recover it.
00:24:37.089 [2024-07-24 20:52:32.532199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.089 [2024-07-24 20:52:32.532224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 00:24:37.091 [2024-07-24 20:52:32.532371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.091 [2024-07-24 20:52:32.532397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 00:24:37.091 [2024-07-24 20:52:32.532499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.091 [2024-07-24 20:52:32.532523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 00:24:37.091 [2024-07-24 20:52:32.532628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.091 [2024-07-24 20:52:32.532652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 00:24:37.091 [2024-07-24 20:52:32.532758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.091 [2024-07-24 20:52:32.532782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 
00:24:37.091 [2024-07-24 20:52:32.532884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.091 [2024-07-24 20:52:32.532909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.091 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.533038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.533207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.533454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.533619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 
00:24:37.092 [2024-07-24 20:52:32.533753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.533957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.534144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.534171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.534307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.092 [2024-07-24 20:52:32.534354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.092 qpair failed and we were unable to recover it. 00:24:37.092 [2024-07-24 20:52:32.534483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.534510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 
00:24:37.093 [2024-07-24 20:52:32.534634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.534658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.534771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.534797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.534958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.534986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.535162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.535187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.535337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.535366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 
00:24:37.093 [2024-07-24 20:52:32.535528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.535553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.535690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.535715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.535844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.535885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.093 [2024-07-24 20:52:32.536029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.093 [2024-07-24 20:52:32.536057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.093 qpair failed and we were unable to recover it. 00:24:37.098 [2024-07-24 20:52:32.536235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.536286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 
00:24:37.098 [2024-07-24 20:52:32.536448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.536473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 00:24:37.098 [2024-07-24 20:52:32.536594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.536622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 00:24:37.098 [2024-07-24 20:52:32.536771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.536796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 00:24:37.098 [2024-07-24 20:52:32.536972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.537041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 00:24:37.098 [2024-07-24 20:52:32.537159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.537188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 
00:24:37.098 [2024-07-24 20:52:32.537373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.098 [2024-07-24 20:52:32.537399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.098 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.537511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.537552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.537693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.537721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.537899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.537924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.538035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.538060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 
00:24:37.099 [2024-07-24 20:52:32.538220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.538252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.538361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.538386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.538534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.538563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.099 qpair failed and we were unable to recover it. 00:24:37.099 [2024-07-24 20:52:32.538759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.099 [2024-07-24 20:52:32.538783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.538892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.538917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 
00:24:37.100 [2024-07-24 20:52:32.539048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.539203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.539338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.539470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.539633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 
00:24:37.100 [2024-07-24 20:52:32.539785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.539936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.539961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.540097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.540121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.100 qpair failed and we were unable to recover it. 00:24:37.100 [2024-07-24 20:52:32.540228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.100 [2024-07-24 20:52:32.540277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.540409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.540434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 
00:24:37.101 [2024-07-24 20:52:32.540585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.540613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.540757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.540782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.540909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.540934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.541090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.541118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.541247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.541275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 
00:24:37.101 [2024-07-24 20:52:32.541436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.101 [2024-07-24 20:52:32.541462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.101 qpair failed and we were unable to recover it. 00:24:37.101 [2024-07-24 20:52:32.541574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.541599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.541732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.541758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.541860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.541885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.542053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 
00:24:37.102 [2024-07-24 20:52:32.542309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.542435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.542591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.542769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.542896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.542921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 
00:24:37.102 [2024-07-24 20:52:32.543100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.543125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.543248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.543274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.102 [2024-07-24 20:52:32.543418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.102 [2024-07-24 20:52:32.543457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.102 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.543593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.543624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.543783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.543809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 
00:24:37.103 [2024-07-24 20:52:32.543961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.543989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.544129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.544155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.544288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.544314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.544443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.544468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 00:24:37.103 [2024-07-24 20:52:32.544597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.103 [2024-07-24 20:52:32.544622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.103 qpair failed and we were unable to recover it. 
00:24:37.103 [2024-07-24 20:52:32.544729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.544756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.544891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.544917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.545887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.545912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.546013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.546037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.546176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.546201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.546317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.546343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.103 [2024-07-24 20:52:32.546484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.103 [2024-07-24 20:52:32.546509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.103 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.546643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.546667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.546770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.546795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.546950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.546978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.547160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.547185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.547363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.547401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.547539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.547569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.547720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.547746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.547887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.547914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.548888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.548929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.549128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.549313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.549492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.549665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.549845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.549972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.550199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.550369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.550530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.550738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.550926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.550951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.551157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.551344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.551498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.551670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.551836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.551995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.552973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.104 [2024-07-24 20:52:32.552998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.104 qpair failed and we were unable to recover it.
00:24:37.104 [2024-07-24 20:52:32.553135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.553160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.553310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.553339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.553494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.553519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.553623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.553648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.553810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.553852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.553988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.554149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.554356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.554521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.554683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.105 qpair failed and we were unable to recover it.
00:24:37.105 [2024-07-24 20:52:32.554819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.105 [2024-07-24 20:52:32.554846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.555913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.555941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.556119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.556144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.556260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.556287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.556449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.556474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.556597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.106 [2024-07-24 20:52:32.556623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.106 qpair failed and we were unable to recover it.
00:24:37.106 [2024-07-24 20:52:32.556762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.556789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.556920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.556945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.107 qpair failed and we were unable to recover it.
00:24:37.107 [2024-07-24 20:52:32.557842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.107 [2024-07-24 20:52:32.557870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.558863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.558984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.559859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.108 [2024-07-24 20:52:32.559992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.108 [2024-07-24 20:52:32.560019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.108 qpair failed and we were unable to recover it.
00:24:37.109 [2024-07-24 20:52:32.560156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.109 [2024-07-24 20:52:32.560182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.109 qpair failed and we were unable to recover it.
00:24:37.109 [2024-07-24 20:52:32.560361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.109 [2024-07-24 20:52:32.560391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.109 qpair failed and we were unable to recover it.
00:24:37.109 [2024-07-24 20:52:32.560574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.109 [2024-07-24 20:52:32.560600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.109 qpair failed and we were unable to recover it.
00:24:37.109 [2024-07-24 20:52:32.560789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.109 [2024-07-24 20:52:32.560818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.109 qpair failed and we were unable to recover it.
00:24:37.109 [2024-07-24 20:52:32.560965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.109 [2024-07-24 20:52:32.560993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.561117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.561164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.561308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.561335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.561470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.561496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.561661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.561686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.561857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.561886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.562855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.562881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.563069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.563227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.563408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.563622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.563828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.563981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.564009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.564131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.564159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.564321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.114 [2024-07-24 20:52:32.564347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.114 qpair failed and we were unable to recover it.
00:24:37.114 [2024-07-24 20:52:32.564453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.564480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.564710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.564739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.564924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.564949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.565060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.565221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 
00:24:37.114 [2024-07-24 20:52:32.565387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.565548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.565725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.565934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.565963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.566108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.566137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 
00:24:37.114 [2024-07-24 20:52:32.566324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.566350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.566496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.566522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.566655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.566681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.566843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.566868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.566999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 
00:24:37.114 [2024-07-24 20:52:32.567183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.567394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.567573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.567737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.567915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.567946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 
00:24:37.114 [2024-07-24 20:52:32.568126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.568152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.568295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.568322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.568534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.568559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.568701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.568729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.568839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.568881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 
00:24:37.114 [2024-07-24 20:52:32.569044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.569069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.114 [2024-07-24 20:52:32.569228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.114 [2024-07-24 20:52:32.569258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.114 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.569404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.569432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.569571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.569599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.569761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.569788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 
00:24:37.115 [2024-07-24 20:52:32.569927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.569955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.570104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.570134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.570276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.570302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.570466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.570494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.570668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.570697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 
00:24:37.115 [2024-07-24 20:52:32.570877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.570902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.571009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.571192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.571412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.571560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 
00:24:37.115 [2024-07-24 20:52:32.571707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.115 [2024-07-24 20:52:32.571897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.115 [2024-07-24 20:52:32.571923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.115 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.572026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.572187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.572328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 
00:24:37.389 [2024-07-24 20:52:32.572488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.572683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.572818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.572844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.573003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.573052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.573212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.573262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 
00:24:37.389 [2024-07-24 20:52:32.573445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.573480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.573644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.573682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.573849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.573885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.574053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.574086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.574225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.574271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 
00:24:37.389 [2024-07-24 20:52:32.574444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.574481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.389 [2024-07-24 20:52:32.574662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.389 [2024-07-24 20:52:32.574695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.389 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.574845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.574878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.575003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.575036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.575216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.575257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 
00:24:37.390 [2024-07-24 20:52:32.575395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.575428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.575586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.575620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.575826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.575859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.575993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.576201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 
00:24:37.390 [2024-07-24 20:52:32.576426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.576584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.576712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.576840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.576865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.576997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 
00:24:37.390 [2024-07-24 20:52:32.577185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.577378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.577516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.577647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 00:24:37.390 [2024-07-24 20:52:32.577773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.390 [2024-07-24 20:52:32.577799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.390 qpair failed and we were unable to recover it. 
00:24:37.390 [2024-07-24 20:52:32.577938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.390 [2024-07-24 20:52:32.577965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.390 qpair failed and we were unable to recover it.
[... the same three-line error group (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim throughout this retry burst, from 2024-07-24 20:52:32.578 through 20:52:32.597 ...]
00:24:37.392 [2024-07-24 20:52:32.597958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.392 [2024-07-24 20:52:32.597986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.392 qpair failed and we were unable to recover it. 00:24:37.392 [2024-07-24 20:52:32.598115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.598140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.598268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.598294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.598488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.598513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.598713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.598773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.598929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.598958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.599108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.599136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.599269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.599296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.599433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.599459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.599645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.599673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.599825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.599854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.600004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.600207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.600387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.600575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.600755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.600909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.600934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.601066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.601291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.601452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.601613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.601772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.601943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.601967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.602111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.602271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.602458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.602592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.602742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.602894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.602919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.603049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.603250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.603425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.603567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.603759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.603969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.603994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 
00:24:37.393 [2024-07-24 20:52:32.604121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.604151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.604336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.604363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.604474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.604499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.393 qpair failed and we were unable to recover it. 00:24:37.393 [2024-07-24 20:52:32.604604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.393 [2024-07-24 20:52:32.604630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.604812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.604841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.604990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.605154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.605339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.605522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.605733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.605926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.605969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.606130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.606294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.606458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.606615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.606752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.606959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.606986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.607139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.607280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.607461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.607672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.607830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.607965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.607992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.608123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.608149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.608311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.608340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.608486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.608515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.608669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.608695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.608876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.608904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.609054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.609084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.609345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.609371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.609482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.609508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.609636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.609664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.609846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.609872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.610021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.610256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.610435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.610598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.610788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.610964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.610989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.611128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.611169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.611315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.611344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.611499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.611524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.611653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.611693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.611877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.611932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.612059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.612215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.612411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.612583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.612769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.612905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.612931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.613068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.613102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.613257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.613286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.613396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.613425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.613613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.613639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.613792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.613822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 
00:24:37.394 [2024-07-24 20:52:32.614003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.614035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.614162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.614188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.394 qpair failed and we were unable to recover it. 00:24:37.394 [2024-07-24 20:52:32.614336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.394 [2024-07-24 20:52:32.614363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.614524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.614552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.614678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.614704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.614814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.614839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.615005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.615202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.615416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.615622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.615834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.615957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.615983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.616172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.616201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.616370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.616396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.616548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.616578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.616793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.616849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.616985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.617029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.617182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.617211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.617373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.617401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.617533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.617559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.617694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.617737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.617962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.618198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.618395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.618582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.618791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.618929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.618955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.619071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.619261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.619417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.619574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.619738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.619904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.619929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.620059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.620089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.620253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.620279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.620445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.620492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.620672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.620697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.620838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.620864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.621039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.621067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.621238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.621274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.621451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.621477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.621610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.621635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.621860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.621911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.622057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.622180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.622368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.622580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.622743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.622926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.622953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.623143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.623270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.623450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.623635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.623770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.623903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.623930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.624094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.624119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.624280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.624309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.624484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.624512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.624667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.624692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.624873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.624900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.625060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.625085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.625186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.625210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.625379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.625404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.625549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.625577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 00:24:37.395 [2024-07-24 20:52:32.625734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.395 [2024-07-24 20:52:32.625758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.395 qpair failed and we were unable to recover it. 
00:24:37.395 [2024-07-24 20:52:32.625932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.625959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.626080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.626228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.626373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.626592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 
00:24:37.396 [2024-07-24 20:52:32.626754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.626893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.626919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.627048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.627240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.627401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 
00:24:37.396 [2024-07-24 20:52:32.627577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.627793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.627951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.627976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.628112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.628137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 00:24:37.396 [2024-07-24 20:52:32.628279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.396 [2024-07-24 20:52:32.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.396 qpair failed and we were unable to recover it. 
00:24:37.396 [2024-07-24 20:52:32.628410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.396 [2024-07-24 20:52:32.628451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.396 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7f4fc8000b90 (addr=10.0.0.2, port=4420) from 20:52:32.628 through 20:52:32.648 ...]
00:24:37.398 [2024-07-24 20:52:32.648347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.648378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.648514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.648539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.648686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.648711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.648811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.648836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.648947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.648989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.649132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.649159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.649284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.649324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.649427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.649453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.649634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.649662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.649831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.649858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.650015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.650144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.650332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.650487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.650681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.650842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.650870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.651028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.651187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.651332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.651500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.651654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.651826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.651855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.652009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.652140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.652315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.652500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.652706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.652890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.652918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.653105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.653130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.653283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.653312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.653461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.653503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.653662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.653687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.653863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.653890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.654387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.654828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.654996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.655151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.655318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.655473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.655653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.655797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.655931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.655955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.656081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.656256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.656418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.656551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 
00:24:37.398 [2024-07-24 20:52:32.656712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.656869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.656898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.398 [2024-07-24 20:52:32.657052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.398 [2024-07-24 20:52:32.657077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.398 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.657211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.657264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.657417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.657445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [2024-07-24 20:52:32.657573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.657597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.657736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.657761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.657904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.657929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.658123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.658257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [2024-07-24 20:52:32.658417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.658596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.658762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.658944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.658971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.659125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.659151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [2024-07-24 20:52:32.659290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.659332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.659478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.659507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.659639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.659664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.659821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.659861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.660001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.660029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [2024-07-24 20:52:32.660215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.660240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.660433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.660462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.660635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.660663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.660820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.660845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.660982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [2024-07-24 20:52:32.661173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.661365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.661528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.661685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 00:24:37.399 [2024-07-24 20:52:32.661868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.399 [2024-07-24 20:52:32.661893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.399 qpair failed and we were unable to recover it. 
00:24:37.399 [... the same three-line sequence — posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 20:52:32.662059 through 20:52:32.684067 ...]
00:24:37.401 [2024-07-24 20:52:32.684200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.684227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.684353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.684379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.684550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.684581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.684707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.684734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.684839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.684865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.685040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.685196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.685390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.685586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.685767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.685951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.685982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.686141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.686183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.686311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.686338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.686508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.686551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.686689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.686735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.686899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.686925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.687022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.687217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.687411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.687543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.687713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.687870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.687897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.688532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.688916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.688942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.689121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.689286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.689441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.689581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.689774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.689936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.689961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.690058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.690251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.690463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.690599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.690755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.690885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.690910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.691016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.691042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.691210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.691235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.691434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.691606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.691634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 
00:24:37.401 [2024-07-24 20:52:32.691788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.401 [2024-07-24 20:52:32.691813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.401 qpair failed and we were unable to recover it. 00:24:37.401 [2024-07-24 20:52:32.691945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.691986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.692107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.692289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.692449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.692635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.692807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.692968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.692993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.693099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.693256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.693469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.693611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.693742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.693904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.693929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.694114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.694143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.694273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.694300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.694472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.694515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.694637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.694666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.694845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.694870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.695005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.695158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.695348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.695514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.695748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.695882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.695908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.696053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.696094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.696220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.696259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.696407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.696432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.696562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.696588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.696777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.696805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.402 [2024-07-24 20:52:32.696990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.697015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.697155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.697180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.697331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.697375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.697502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.697527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 00:24:37.402 [2024-07-24 20:52:32.697658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.402 [2024-07-24 20:52:32.697684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.402 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.716947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.716972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.717104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.717310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.717466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.717628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.717811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.717955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.717980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.718108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.718280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.718438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.718612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.718787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.718944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.718987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.719101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.719254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.719410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.719602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.719793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.719947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.719977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.720123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.720151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.720299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.720325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.720458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.720484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.720634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.720661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.720835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.720861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.720973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.721194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.721365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.721522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.721697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.721867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.721892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 
00:24:37.404 [2024-07-24 20:52:32.722022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.722047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.722220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.722259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.722428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.722454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.722592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.404 [2024-07-24 20:52:32.722636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.404 qpair failed and we were unable to recover it. 00:24:37.404 [2024-07-24 20:52:32.722817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.722846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.722991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.723145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.723336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.723515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.723673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.723864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.723892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.724040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.724203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.724403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.724571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.724695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.724907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.724935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.725095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.725227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.725392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.725518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.725674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.725831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.725860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.726295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.726964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.726989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.727121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.727326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.727488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.727626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.727757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.727894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.727919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.728051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.728240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.728409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.728604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.728790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.728948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.728974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.729100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.729143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.729298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.729328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 00:24:37.405 [2024-07-24 20:52:32.729458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.405 [2024-07-24 20:52:32.729484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.405 qpair failed and we were unable to recover it. 
00:24:37.405 [2024-07-24 20:52:32.729629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.729656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.729800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.729829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.729987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.730175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.730385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.730543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.730699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.730861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.730890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.731901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.731927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.732874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.732902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.733056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.733082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.405 [2024-07-24 20:52:32.733180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.405 [2024-07-24 20:52:32.733205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.405 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.733346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.733375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.733573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.733599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.733701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.733727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.733851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.733880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.734842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.734867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.735857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.735993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.736161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.736327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.736513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.736685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.736887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.736916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.737970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.737996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.738869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.738894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.739954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.739980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.740945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.740987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.741153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.741346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.741503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.741690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.741863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.741998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.742175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.742345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.742474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.742649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.742880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.742907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.743030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.743057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.743166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.743193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.743310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.743338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.743448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.406 [2024-07-24 20:52:32.743476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:37.406 qpair failed and we were unable to recover it.
00:24:37.406 [2024-07-24 20:52:32.743619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.743646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.743822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.743847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.743980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.744166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.744376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.744507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.744694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.744854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.744883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.745047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.745090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.745236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.745296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.745461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.745487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.745639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.745668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.745818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.745846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.746898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.746923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.747924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.747950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.748044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.748070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.748235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.748271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.748395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.748421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.748583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.407 [2024-07-24 20:52:32.748608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.407 qpair failed and we were unable to recover it.
00:24:37.407 [2024-07-24 20:52:32.748770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.748799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.748926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.748952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.749056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.749192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.749347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 
00:24:37.407 [2024-07-24 20:52:32.749494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.749657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.749804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.749847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.750026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.750179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 
00:24:37.407 [2024-07-24 20:52:32.750352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.750490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.750658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.750836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.750860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.751012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 
00:24:37.407 [2024-07-24 20:52:32.751198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.751337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.751481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.751659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.751885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.751927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 
00:24:37.407 [2024-07-24 20:52:32.752035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.752170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.752325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.752457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.752615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 
00:24:37.407 [2024-07-24 20:52:32.752773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.752961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.407 [2024-07-24 20:52:32.752987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.407 qpair failed and we were unable to recover it. 00:24:37.407 [2024-07-24 20:52:32.753117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.753261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.753418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.753602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.753764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.753921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.753947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.754050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.754209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.754349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.754515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.754702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.754854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.754882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.755018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.755173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.755370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.755499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.755714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.755870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.755912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.756039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.756194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.756375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.756544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.756752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.756953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.756982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.757116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.757160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.757316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.757342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.757468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.757493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.757645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.757708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.757937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.757989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.758132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.758301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.758495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.758649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.758781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.758940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.758968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.759114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.759267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.759440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.759571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.759732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.759888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.759917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.760092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.760237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.760407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.760589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.760745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.760954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.760982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 00:24:37.408 [2024-07-24 20:52:32.761116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.408 [2024-07-24 20:52:32.761144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.408 qpair failed and we were unable to recover it. 
00:24:37.408 [2024-07-24 20:52:32.761304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.408 [2024-07-24 20:52:32.761330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.408 qpair failed and we were unable to recover it.
[... the same three-line failure record repeated with only timestamps varying, from 20:52:32.761 to 20:52:32.781: every connect() attempt to 10.0.0.2, port 4420 failed with errno = 111 (ECONNREFUSED), and tqpair=0x672250 could not be recovered ...]
00:24:37.410 [2024-07-24 20:52:32.781057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.410 [2024-07-24 20:52:32.781082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.410 qpair failed and we were unable to recover it.
00:24:37.410 [2024-07-24 20:52:32.781226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.781260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.781418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.781443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.781543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.781568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.781685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.781713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.781851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.781877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 
00:24:37.410 [2024-07-24 20:52:32.782032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.782171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.782300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.782426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.782581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 
00:24:37.410 [2024-07-24 20:52:32.782703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.782854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.782879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.783049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.783208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.783393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 
00:24:37.410 [2024-07-24 20:52:32.783580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.783730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.783863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.783889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.784027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.784055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.784206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.784231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 
00:24:37.410 [2024-07-24 20:52:32.784363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.784388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.410 [2024-07-24 20:52:32.784537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.410 [2024-07-24 20:52:32.784563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.410 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.784665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.784691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.784829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.784854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.784985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.785166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.785337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.785490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.785652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.785856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.785884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.786026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.786206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.786389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.786569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.786744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.786908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.786933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.787657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.787864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.787988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.788161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.788346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.788487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.788677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.788863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.788889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.789087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.789113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.789251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.789276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.789415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.789458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.789646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.789671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.789806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.789832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.789963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.790178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.790342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.790503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.790662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.790823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.790853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.791002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.791153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.791323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.791479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.791676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.791833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.791859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.791990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.792725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.792870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.792997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.793025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.793204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.793232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 00:24:37.411 [2024-07-24 20:52:32.793394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.793419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.411 [2024-07-24 20:52:32.793538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.411 [2024-07-24 20:52:32.793566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.411 qpair failed and we were unable to recover it. 
00:24:37.413 [... the same posix.c:1023 connect() failed (errno = 111) / nvme_tcp.c:2383 sock connection error pair for tqpair=0x672250, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeats for every retry from 20:52:32.793694 through 20:52:32.813116 ...]
00:24:37.413 [2024-07-24 20:52:32.813250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.813276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.813406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.813431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.813594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.813622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.813773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.813798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.813935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.813976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 
00:24:37.413 [2024-07-24 20:52:32.814154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.814182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.814343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.814369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.814471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.814496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.814653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.814681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.814815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.814840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 
00:24:37.413 [2024-07-24 20:52:32.814984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.815175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.815356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.815481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.815662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 
00:24:37.413 [2024-07-24 20:52:32.815824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.815849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.816011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.816036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.413 [2024-07-24 20:52:32.816167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.413 [2024-07-24 20:52:32.816195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.413 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.816329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.816355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.816512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.816536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.816695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.816723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.816859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.816884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.817043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.817217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.817410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.817534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.817745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.817896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.817921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.818026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.818167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.818335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.818505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.818717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.818893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.818917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.819042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.819226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.819388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.819572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.819710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.819870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.819895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.820073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.820217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.820456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.820594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.820769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.820944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.820973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.821131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.821156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.821317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.821346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.821508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.821533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.821708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.821736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.821845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.821873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.822028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.822181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.822358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.822518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.822652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.822862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.822890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.823014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.823174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.823363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.823546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.823671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.823848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.823876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.824030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.824231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.824412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.824583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.824737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.824930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.824958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.825117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.825259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.825392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.825592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.825750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 00:24:37.414 [2024-07-24 20:52:32.825918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.414 [2024-07-24 20:52:32.825945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.414 qpair failed and we were unable to recover it. 
00:24:37.414 [2024-07-24 20:52:32.826074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.826238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.826393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.826587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.826731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.826861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.414 [2024-07-24 20:52:32.826887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.414 qpair failed and we were unable to recover it.
00:24:37.414 [2024-07-24 20:52:32.827025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.827962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.827987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.828919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.828944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.829928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.829952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.830964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.830989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.831918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.831943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.832959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.832984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.833110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.833135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.833272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.833300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.833485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.833516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.833626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.833668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.833844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.833872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.834877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.834902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.835892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.835995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.836847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.836978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.837003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.837140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.837165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.415 qpair failed and we were unable to recover it.
00:24:37.415 [2024-07-24 20:52:32.837300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.415 [2024-07-24 20:52:32.837326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.837492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.837517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.837668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.837696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.837843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.837872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.838959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.838987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.839967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.839995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.840928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.840956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.841934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.841958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.842938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.842963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.843938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.843967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.844153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.844178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.844329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.844357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.844471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.844499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.844684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.844709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.844847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.844874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.845046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.845073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.845237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.416 [2024-07-24 20:52:32.845278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.416 qpair failed and we were unable to recover it.
00:24:37.416 [2024-07-24 20:52:32.845465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.845493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.845647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.845675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.845795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.845820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.845920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.845945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.846098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 
00:24:37.416 [2024-07-24 20:52:32.846283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.846415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.846600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.846777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.846936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.846978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 
00:24:37.416 [2024-07-24 20:52:32.847089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.847117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.847262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.847288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.847418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.416 [2024-07-24 20:52:32.847443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.416 qpair failed and we were unable to recover it. 00:24:37.416 [2024-07-24 20:52:32.847601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.847628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.847782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.847807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.847903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.847929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.848055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.848204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.848383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.848514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.848668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.848826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.848868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.849041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.849194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.849366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.849567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.849688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.849852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.849876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.850031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.850220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.850412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.850585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.850745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.850888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.850913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.851070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.851230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.851429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.851627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.851773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.851893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.851918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.852069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.852264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.852392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.852583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.852764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.852920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.852962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.853113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.853141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.853291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.853321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.853501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.853528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.853709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.853737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.853890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.853915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.854050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.854205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.854383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.854540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.854727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.854940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.854965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.855144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.855318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.855471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.855605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.855788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.855946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.855971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.856132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.856156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.856334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.856360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.856494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.856519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.856640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.856665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.856825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.856853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.857319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.857873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.857993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.858018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 
00:24:37.417 [2024-07-24 20:52:32.858188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.858229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.858408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.417 [2024-07-24 20:52:32.858436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.417 qpair failed and we were unable to recover it. 00:24:37.417 [2024-07-24 20:52:32.858558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.858582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.858686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.858711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.858856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.858884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.859035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.859165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.859316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.859481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.859636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.859782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.859810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.859993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.860175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.860369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.860502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.860640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.860773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.860955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.860980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.861156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.861184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.861342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.861381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.861519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.861544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.861692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.861717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.861872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.861900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.862030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.862214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.862410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.862571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.862754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.862883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.862907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.863040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.863211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.863362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.863542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.863701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.863863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.863890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.864072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.864236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.864373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.864504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.864669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.864852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.864880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.865036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.865171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.865344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.865470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.865681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.865853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.865880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.866039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.866214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.866406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.866584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.866739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.866933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.866961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.867093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.867217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.867415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.867596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.867722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.867856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.867881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 00:24:37.418 [2024-07-24 20:52:32.868042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.418 [2024-07-24 20:52:32.868067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.418 qpair failed and we were unable to recover it. 
00:24:37.418 [2024-07-24 20:52:32.868210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.868237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.868402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.868430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.868572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.868597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.868760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.868785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.868946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.868975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.869104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.869282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.869432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.869627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.869800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.869921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.869946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.870101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.870281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.870454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.870614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.870737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.870914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.870944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.871077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.871235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.871415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.871548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.871706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.871860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.871884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.872038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.872177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.872397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.872552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.872691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.872883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.872908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.873069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.873093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 
00:24:37.419 [2024-07-24 20:52:32.873251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.873280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.873428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.873456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.419 qpair failed and we were unable to recover it. 00:24:37.419 [2024-07-24 20:52:32.873582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.419 [2024-07-24 20:52:32.873607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.420 qpair failed and we were unable to recover it. 00:24:37.420 [2024-07-24 20:52:32.873711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.420 [2024-07-24 20:52:32.873742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.420 qpair failed and we were unable to recover it. 00:24:37.420 [2024-07-24 20:52:32.873900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.420 [2024-07-24 20:52:32.873925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.420 qpair failed and we were unable to recover it. 
00:24:37.420 [2024-07-24 20:52:32.874030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.420 [2024-07-24 20:52:32.874056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.420 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with successive timestamps through [2024-07-24 20:52:32.893478] ...]
00:24:37.423 [2024-07-24 20:52:32.893453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.423 [2024-07-24 20:52:32.893478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.423 qpair failed and we were unable to recover it.
00:24:37.423 [2024-07-24 20:52:32.893658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.893685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.893794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.893822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.893972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.893998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.894103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.894129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.894263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.894292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 
00:24:37.423 [2024-07-24 20:52:32.894419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.894444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.894591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.894616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.894769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.894797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.894978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.895151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 
00:24:37.423 [2024-07-24 20:52:32.895357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.895569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.895743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.895921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.895945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.896073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.896097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 
00:24:37.423 [2024-07-24 20:52:32.896230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.896262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.896420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.896445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.896546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.896571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.896711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.423 [2024-07-24 20:52:32.896736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.423 qpair failed and we were unable to recover it. 00:24:37.423 [2024-07-24 20:52:32.896882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.896914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.897033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.897194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.897361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.897517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.897649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.897796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.897964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.897989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.898147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.898172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.898334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.898368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.898522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.898547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.898686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.898712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.898816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.898842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.898974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.899158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.899347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.899503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.899634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.899763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.899890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.899915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.900046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.900224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.900402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.900529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.900705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.900852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.900877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.900986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.901170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.901377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.901509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.901633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.901793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.901820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.901998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.902170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.902383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.902518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 
00:24:37.424 [2024-07-24 20:52:32.902710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-24 20:52:32.902896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.424 [2024-07-24 20:52:32.902921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.903097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.903125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.903295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.903323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.903502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.903528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 
00:24:37.425 [2024-07-24 20:52:32.903657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.903707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.903901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.903926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.904081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.904281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.904463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 
00:24:37.425 [2024-07-24 20:52:32.904615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.904779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.904939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.904967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.905093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.905259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 
00:24:37.425 [2024-07-24 20:52:32.905411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.905576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.905732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.905892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.905917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.906097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 
00:24:37.425 [2024-07-24 20:52:32.906254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.906418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.906546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.906699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 00:24:37.425 [2024-07-24 20:52:32.906848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.425 [2024-07-24 20:52:32.906876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.425 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.925620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.925648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.925805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.925831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.925968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.925993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.926101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.926236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.926401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.926535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.926692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.926851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.926876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.927006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.927214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.927406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.927583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.927770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.927898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.927923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.928116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.928144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.928298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.928327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.928486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.928512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.928677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.928702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.928864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.928905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.929028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.929194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.929387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.929559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.929757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.929886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.929911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.930069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.930246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.930447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.930592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 
00:24:37.429 [2024-07-24 20:52:32.930745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.930922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.930949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.931051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.931094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.931249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.429 [2024-07-24 20:52:32.931278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.429 qpair failed and we were unable to recover it. 00:24:37.429 [2024-07-24 20:52:32.931433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.931458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.931610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.931638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.931771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.931799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.931952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.931977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.932084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.932235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.932396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.932548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.932674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.932847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.932874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.933021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.933203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.933392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.933569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.933711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.933920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.933946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.934068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.934196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.934341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.934501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.934666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.934819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.934844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.934973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.935179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.935362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.935514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.935648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.935826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.935851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.935987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.936176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.936397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.936562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.936696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.936881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.936908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.937034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.937062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.937220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.937251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 
00:24:37.430 [2024-07-24 20:52:32.937361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.937548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.937576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.430 qpair failed and we were unable to recover it. 00:24:37.430 [2024-07-24 20:52:32.937689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.430 [2024-07-24 20:52:32.937716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.431 qpair failed and we were unable to recover it. 00:24:37.431 [2024-07-24 20:52:32.937900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.431 [2024-07-24 20:52:32.937925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.431 qpair failed and we were unable to recover it. 00:24:37.431 [2024-07-24 20:52:32.938079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.431 [2024-07-24 20:52:32.938114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.431 qpair failed and we were unable to recover it. 
00:24:37.431 [2024-07-24 20:52:32.938253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.431 [2024-07-24 20:52:32.938282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.431 qpair failed and we were unable to recover it.
[preceding three-line error sequence repeated verbatim (connect() failed, errno = 111; sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) through 2024-07-24 20:52:32.957121; log timestamps 00:24:37.431 to 00:24:37.719]
00:24:37.719 [2024-07-24 20:52:32.957220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.957250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.957354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.957380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.957532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.957559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.957708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.957734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.957861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.957886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 
00:24:37.719 [2024-07-24 20:52:32.958044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.958222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.958375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.958533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.958660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 
00:24:37.719 [2024-07-24 20:52:32.958796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.958956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.958981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.959111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.959295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.959444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 
00:24:37.719 [2024-07-24 20:52:32.959626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.959786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.959948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.959976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.960117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.960145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.960300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.960325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 
00:24:37.719 [2024-07-24 20:52:32.960450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.960480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.960616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.960641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.960795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.719 [2024-07-24 20:52:32.960824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.719 qpair failed and we were unable to recover it. 00:24:37.719 [2024-07-24 20:52:32.960956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.960982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.961125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.961315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.961487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.961657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.961786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.961942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.961970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.962107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.962134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.962280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.962306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.962469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.962494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.962646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.962674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.962817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.962845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.963026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.963187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.963359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.963517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.963693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.963854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.963898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.964048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.964076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.964224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.964259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.964434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.964460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.964609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.964634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.964790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.964832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.964982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.965170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.965342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.965531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.965667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.965840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.965865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.966001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.966191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.966359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 
00:24:37.720 [2024-07-24 20:52:32.966519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.966649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.966874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.966902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.967057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.967085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.720 qpair failed and we were unable to recover it. 00:24:37.720 [2024-07-24 20:52:32.967255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.720 [2024-07-24 20:52:32.967280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.967427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.967455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.967601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.967782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.967811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.967966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.967992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.968124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.968165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.968308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.968336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.968498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.968524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.968655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.968680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.968809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.968849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.968996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.969208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.969371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.969533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.969740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.969939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.969967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.970153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.970178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.970289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.970333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.970510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.970573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.970745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.970773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.970920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.970945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.971052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.971199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.971392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.971551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.971708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 
00:24:37.721 [2024-07-24 20:52:32.971885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.971913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.721 [2024-07-24 20:52:32.972031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.721 [2024-07-24 20:52:32.972060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.721 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.972185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.972210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.972339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.972371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.972501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.972526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.972682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.972710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.972837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.972862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.973007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.973163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.973335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.973529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.973704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.973883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.973923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.974082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.974259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.974412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.974597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.974773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.974926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.974952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.975064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.975276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.975424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.975604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.975765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.975896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.975920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.976057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.976218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.976380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.976548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.976691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 
00:24:37.722 [2024-07-24 20:52:32.976870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.976895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.722 [2024-07-24 20:52:32.977006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.722 [2024-07-24 20:52:32.977032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.722 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.977196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.977224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.977360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.977388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.977543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.977569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.977703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.977745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.977886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.977914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.978057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.978263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.978418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.978604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.978737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.978889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.978914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.979073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.979248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.979424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.979567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.979699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.979880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.979908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.980044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.980216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.980408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.980600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.980772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.980957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.980983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.981092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.981116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.981311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.981336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.981470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.981495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.981696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.981721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.981820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.981862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.982017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.982214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.982386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.982527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.982687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 
00:24:37.723 [2024-07-24 20:52:32.982872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.982896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.983070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.983095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.983225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.983275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.983461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.983490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.723 qpair failed and we were unable to recover it. 00:24:37.723 [2024-07-24 20:52:32.983637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.723 [2024-07-24 20:52:32.983665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 
00:24:37.724 [2024-07-24 20:52:32.983825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.983850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.983986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.984153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.984320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.984461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 
00:24:37.724 [2024-07-24 20:52:32.984620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.984804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.984833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.984983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.985169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.985320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 
00:24:37.724 [2024-07-24 20:52:32.985495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.985642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.985785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.985948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.985973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 00:24:37.724 [2024-07-24 20:52:32.986083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.986108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 
00:24:37.724 [2024-07-24 20:52:32.986248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.724 [2024-07-24 20:52:32.986274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.724 qpair failed and we were unable to recover it. 
[... the same two-line error pair (connect() failed with errno 111 / ECONNREFUSED, then sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420, qpair not recovered) repeats for every retry from 2024-07-24 20:52:32.986248 through 20:52:33.005878; duplicate entries elided ...]
00:24:37.727 [2024-07-24 20:52:33.005850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.005878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.006022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.006200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.006358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.006512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.006696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.006863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.006891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.007019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.007183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.007403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.007548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.007721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.007894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.007919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.008566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.008862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.008995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.009152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.009362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.009563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.009747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.009948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.009976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.010115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.010312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.010522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.010683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.010817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.010953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.010978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.011111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.011267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.011425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.011630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.011804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 
00:24:37.728 [2024-07-24 20:52:33.011952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.011994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.012136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.728 [2024-07-24 20:52:33.012165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.728 qpair failed and we were unable to recover it. 00:24:37.728 [2024-07-24 20:52:33.012319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.012347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.012474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.012501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.012617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.012642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.012752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.012777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.012904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.012931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.013561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.013862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.013992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.014126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.014279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.014441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.014611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.014768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.014964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.014990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.015122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.015147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.015308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.015334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.015463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.015491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.015644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.015669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.015830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.015870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.015985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.016151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.016310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.016446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.016596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 
00:24:37.729 [2024-07-24 20:52:33.016765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.016968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.016993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.729 qpair failed and we were unable to recover it. 00:24:37.729 [2024-07-24 20:52:33.017128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.729 [2024-07-24 20:52:33.017153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.017253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.017279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.017431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.017459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 
00:24:37.730 [2024-07-24 20:52:33.017615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.017640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.017798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.017823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.017935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.017964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.018116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.018144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 00:24:37.730 [2024-07-24 20:52:33.018302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.730 [2024-07-24 20:52:33.018328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.730 qpair failed and we were unable to recover it. 
00:24:37.730 [2024-07-24 20:52:33.018491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.018516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.018716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.018776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.018969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.018994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.019155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.019180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.019326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.019354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.019575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.019633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.019804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.019833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.019995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.020947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.020972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.021080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.021109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.021270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.021295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.021462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.021504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.021644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.021672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.021821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.021849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.022915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.022943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.023146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.023287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.023493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.023640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.730 [2024-07-24 20:52:33.023772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.730 qpair failed and we were unable to recover it.
00:24:37.730 [2024-07-24 20:52:33.023882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.023908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.024094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.024307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.024546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.024701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.024876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.024977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.025954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.025982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.026188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.026364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.026492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.026679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.026855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.026990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.027145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.027300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.027462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.027629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.027834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.027862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.028851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.028877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.029007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.029049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.731 qpair failed and we were unable to recover it.
00:24:37.731 [2024-07-24 20:52:33.029202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.731 [2024-07-24 20:52:33.029230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.029383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.029411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.029590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.029615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.029758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.029786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.029900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.029929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.030917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.030942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.031948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.031990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.032973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.032998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.033107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.033149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.033339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.033365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.033482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.033507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.033666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.033691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.033871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.033899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.034891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.034920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.035048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.035074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.035220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.035281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.035403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.035432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.035566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.035593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.732 [2024-07-24 20:52:33.035754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.732 [2024-07-24 20:52:33.035780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.732 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.035907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.035932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.036830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.036859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.037909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.037950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.038123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.038151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.038299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.733 [2024-07-24 20:52:33.038328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.733 qpair failed and we were unable to recover it.
00:24:37.733 [2024-07-24 20:52:33.038482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.038508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.038608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.038634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.038769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.038797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.038943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.038971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.039101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.039126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 
00:24:37.733 [2024-07-24 20:52:33.039233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.039266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.039414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.039442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.039584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.039776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.039802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.039989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 
00:24:37.733 [2024-07-24 20:52:33.040128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.040300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.040475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.040611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.040773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 
00:24:37.733 [2024-07-24 20:52:33.040935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.040963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.041096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.041252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.041438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.041627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 
00:24:37.733 [2024-07-24 20:52:33.041814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.041971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.041996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.733 [2024-07-24 20:52:33.042131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.733 [2024-07-24 20:52:33.042157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.733 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.042318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.042347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.042499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.042524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.042628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.042653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.042810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.042835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.042939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.042964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.043100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.043235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.043432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.043635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.043788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.043919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.043943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.044060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.044261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.044445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.044611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.044741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.044918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.044945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.045101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.045233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.045458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.045611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.045794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.045956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.045999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.046182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.046208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.046325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.046350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.046485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.046518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.046674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.046702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.046852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.046880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.047023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.047167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.047344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.047507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.047695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.047882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.047907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.048084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.048112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.048256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.048285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 00:24:37.734 [2024-07-24 20:52:33.048395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.048423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.734 qpair failed and we were unable to recover it. 
00:24:37.734 [2024-07-24 20:52:33.048569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.734 [2024-07-24 20:52:33.048594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.048693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.048718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.048902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.048930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.049053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.049081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.049252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.049278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 
00:24:37.735 [2024-07-24 20:52:33.049408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.049433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.049590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.049631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.049780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.049809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.049993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.050172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 
00:24:37.735 [2024-07-24 20:52:33.050360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.050529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.050718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.050852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.050877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.050981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 
00:24:37.735 [2024-07-24 20:52:33.051139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.051303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.051494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.051706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 00:24:37.735 [2024-07-24 20:52:33.051880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.735 [2024-07-24 20:52:33.051908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.735 qpair failed and we were unable to recover it. 
00:24:37.738 [2024-07-24 20:52:33.071056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.071224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.071404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.071610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.071745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 
00:24:37.738 [2024-07-24 20:52:33.071894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.071922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.072049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.072208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.072397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.072574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 
00:24:37.738 [2024-07-24 20:52:33.072753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.072916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.072941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.073124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.073276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.073455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 
00:24:37.738 [2024-07-24 20:52:33.073615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.073766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.073917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.073944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.738 qpair failed and we were unable to recover it. 00:24:37.738 [2024-07-24 20:52:33.074060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.738 [2024-07-24 20:52:33.074102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.074230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.074261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.074361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.074386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.074491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.074516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.074624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.074653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.074792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.074833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.074985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.075166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.075349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.075536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.075682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.075853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.075881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.076023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.076183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.076413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.076597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.076787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.076944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.076969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.077137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.077165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.077286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.077315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.077463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.077489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.077630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.077671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.077880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.077933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.078060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.078253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.078439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.078582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.078755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.078912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.078937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.079500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.079849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.079991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.080019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.080195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.080220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 
00:24:37.739 [2024-07-24 20:52:33.080334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.080360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.739 [2024-07-24 20:52:33.080512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.739 [2024-07-24 20:52:33.080540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.739 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.080657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.080686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.080868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.080893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.081074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 
00:24:37.740 [2024-07-24 20:52:33.081270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.081442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.081597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.081722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.081886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.081911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 
00:24:37.740 [2024-07-24 20:52:33.082014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.082191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.082347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.082551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.082729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 
00:24:37.740 [2024-07-24 20:52:33.082948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.082973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.083114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.083142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.083270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.083307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.083435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.083463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.083649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.083674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 
00:24:37.740 [2024-07-24 20:52:33.083866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.083894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.084038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.084065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.084208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.084236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.084388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.084414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 00:24:37.740 [2024-07-24 20:52:33.084528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.740 [2024-07-24 20:52:33.084553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.740 qpair failed and we were unable to recover it. 
00:24:37.743 [2024-07-24 20:52:33.102177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.102205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.102376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.102401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.102506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.102531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.102655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.102683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.102830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.102857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 
00:24:37.743 [2024-07-24 20:52:33.102988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.103154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.103327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.103512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.103681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 
00:24:37.743 [2024-07-24 20:52:33.103843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.103868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.104020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.104174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.104335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.104505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 
00:24:37.743 [2024-07-24 20:52:33.104686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.104882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.104910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.105028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.743 [2024-07-24 20:52:33.105053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.743 qpair failed and we were unable to recover it. 00:24:37.743 [2024-07-24 20:52:33.105197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.105222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.105395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.105424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.105602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.105630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.105759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.105786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.105966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.105994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.106139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.106338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.106470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.106629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.106784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.106958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.106986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.107133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.107158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.107291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.107334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.107505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.107533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.107681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.107709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.107886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.107911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.108062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.108249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.108420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.108631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.108784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.108966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.108994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.109161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.109186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.109290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.109320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.109421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.109446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.109625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.109653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.109822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.109850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.110007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.110165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.110368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.110517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.110668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.110792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.110930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.110955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.111133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.111161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.111311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.111337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.111492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.111520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 
00:24:37.744 [2024-07-24 20:52:33.111668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.744 [2024-07-24 20:52:33.111696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.744 qpair failed and we were unable to recover it. 00:24:37.744 [2024-07-24 20:52:33.111850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.111878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.112033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.112185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.112397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 
00:24:37.745 [2024-07-24 20:52:33.112541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.112695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.112848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.112874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.113026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.113228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 
00:24:37.745 [2024-07-24 20:52:33.113415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.113544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.113759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.113899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.113928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.114109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.114134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 
00:24:37.745 [2024-07-24 20:52:33.114286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.114315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.114479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.114552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.114696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.114723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.114885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.114910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.115083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 
00:24:37.745 [2024-07-24 20:52:33.115220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.115381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.115537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.115687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 00:24:37.745 [2024-07-24 20:52:33.115873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.745 [2024-07-24 20:52:33.115901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.745 qpair failed and we were unable to recover it. 
00:24:37.745 [2024-07-24 20:52:33.116021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.116174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.116375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.116539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.116713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.116904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.116929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.117060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.117102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.117251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.117294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.117454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.117479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.745 qpair failed and we were unable to recover it.
00:24:37.745 [2024-07-24 20:52:33.117577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.745 [2024-07-24 20:52:33.117602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.117709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.117735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.117862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.117889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.118858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.118883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.119906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.119934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.120902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.120931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.121947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.121975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.122934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.122959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.123121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.123285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.123426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.123606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.746 [2024-07-24 20:52:33.123781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.746 qpair failed and we were unable to recover it.
00:24:37.746 [2024-07-24 20:52:33.123931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.123959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.124956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.124981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.125871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.125899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.126895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.126920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.127875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.127903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.128865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.128890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.129050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.129093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.129275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.129300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.129439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.129464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.129593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.747 [2024-07-24 20:52:33.129618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.747 qpair failed and we were unable to recover it.
00:24:37.747 [2024-07-24 20:52:33.129727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.129752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.129910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.129937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.130931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.130956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.131916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.131944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.132940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.132965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.133900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.133926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.134934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.134959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.135085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.748 [2024-07-24 20:52:33.135125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.748 qpair failed and we were unable to recover it.
00:24:37.748 [2024-07-24 20:52:33.135236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.135285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 00:24:37.748 [2024-07-24 20:52:33.135439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.135467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 00:24:37.748 [2024-07-24 20:52:33.135621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.135646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 00:24:37.748 [2024-07-24 20:52:33.135781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.135823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 00:24:37.748 [2024-07-24 20:52:33.135971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.136000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 
00:24:37.748 [2024-07-24 20:52:33.136143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.748 [2024-07-24 20:52:33.136171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.748 qpair failed and we were unable to recover it. 00:24:37.748 [2024-07-24 20:52:33.136304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.136330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.136434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.136459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.136565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.136590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.136731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.136757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.136933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.136960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.137115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.137143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.137300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.137326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.137463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.137489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.137624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.137649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.137787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.137830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.137972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.138189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.138364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.138501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.138680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.138876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.138901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.139002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.139159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.139404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.139560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.139744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.139902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.139945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.140105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.140237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.140414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.140553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.140722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.140879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.140907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.141067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.141200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.141400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.141587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.141794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.141927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.141954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.142143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.142172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.142318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.142347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.142507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.142533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.142663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.142689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.142849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.142877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 
00:24:37.749 [2024-07-24 20:52:33.143022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.143051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.143207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.749 [2024-07-24 20:52:33.143233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.749 qpair failed and we were unable to recover it. 00:24:37.749 [2024-07-24 20:52:33.143356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.143381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.143513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.143538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.143648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.143673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.143806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.143836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.143937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.143962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.144062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.144218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.144438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.144609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.144739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.144876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.144901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.145322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.145860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.145999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.146155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.146313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.146480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.146659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.146881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.146908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.147029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.147194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.147355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.147522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.147695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 
00:24:37.750 [2024-07-24 20:52:33.147818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.147844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.147999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.148028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.148148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.148175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.750 [2024-07-24 20:52:33.148323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.750 [2024-07-24 20:52:33.148349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.750 qpair failed and we were unable to recover it. 00:24:37.751 [2024-07-24 20:52:33.148480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.751 [2024-07-24 20:52:33.148506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.751 qpair failed and we were unable to recover it. 
00:24:37.753 [2024-07-24 20:52:33.167146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.167351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.167525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.167664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.167840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 
00:24:37.753 [2024-07-24 20:52:33.167971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.167996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.168193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.168218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.168363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.168388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.168550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.168575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.168725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.168753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 
00:24:37.753 [2024-07-24 20:52:33.168880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.168908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.169055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.169289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.169439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.169576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 
00:24:37.753 [2024-07-24 20:52:33.169720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.169878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.169903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.170039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.170064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.170210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.753 [2024-07-24 20:52:33.170235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.753 qpair failed and we were unable to recover it. 00:24:37.753 [2024-07-24 20:52:33.170376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.170401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.170532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.170557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.170690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.170715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.170873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.170901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.171045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.171238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.171385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.171569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.171731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.171883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.171909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.172009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.172191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.172390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.172549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.172706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.172828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.172853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.172991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.173148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.173279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.173502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.173678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.173831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.173856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.173990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.174117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.174303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.174514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.174688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.174836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.174864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.175032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.175185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.175389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 
00:24:37.754 [2024-07-24 20:52:33.175582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.175746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.175887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.175912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.754 qpair failed and we were unable to recover it. 00:24:37.754 [2024-07-24 20:52:33.176012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.754 [2024-07-24 20:52:33.176055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.176191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 
00:24:37.755 [2024-07-24 20:52:33.176355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.176511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.176647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.176773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.176946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.176974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 
00:24:37.755 [2024-07-24 20:52:33.177146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.177268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.177405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.177564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.177724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 
00:24:37.755 [2024-07-24 20:52:33.177916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.177945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.178068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.178269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.178422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.178578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 
00:24:37.755 [2024-07-24 20:52:33.178730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.178891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.178916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.179054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.179079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.179205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.179230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 00:24:37.755 [2024-07-24 20:52:33.179340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.755 [2024-07-24 20:52:33.179365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.755 qpair failed and we were unable to recover it. 
00:24:37.755 [2024-07-24 20:52:33.179523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.179548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.179717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.179742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.179876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.179901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.180824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.180852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.181897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.181937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.182934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.182959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.755 [2024-07-24 20:52:33.183068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.755 [2024-07-24 20:52:33.183093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.755 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.183198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.183223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.183391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.183416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.183514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.183540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.183676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.183701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.183832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.183857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.184917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.184942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.185914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.185942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.186919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.186944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.187129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.187319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.187484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.187648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.187827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.187972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.188001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.188145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.188170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.188309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.188354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.188561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.188618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.188901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.188954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.189134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.189163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.189281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.189323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.189471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.189501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.189708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.189759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.189915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.189940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.190090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.190118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.756 qpair failed and we were unable to recover it.
00:24:37.756 [2024-07-24 20:52:33.190293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.756 [2024-07-24 20:52:33.190321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.190462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.190490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.190653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.190678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.190782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.190807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.190971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.190999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.191149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.191191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.191304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.191330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.191437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.191462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.191646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.191674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.191813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.191877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.192854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.192879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.193871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.193980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.194934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.194962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.195950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.195975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.196955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.196983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.197133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.197160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.197285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.197311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.197415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.197440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.197617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.757 [2024-07-24 20:52:33.197644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.757 qpair failed and we were unable to recover it.
00:24:37.757 [2024-07-24 20:52:33.197883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.197908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.198915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.758 [2024-07-24 20:52:33.198940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.758 qpair failed and we were unable to recover it.
00:24:37.758 [2024-07-24 20:52:33.199088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.199250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.199431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.199592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.199717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 
00:24:37.758 [2024-07-24 20:52:33.199893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.199920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.200099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.200286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.200416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.200598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 
00:24:37.758 [2024-07-24 20:52:33.200727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.200891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.200915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.201026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.201051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.201166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.201190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.201326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.201351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 
00:24:37.758 [2024-07-24 20:52:33.201460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.201500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.201652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.201679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.201957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.202159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.202339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 
00:24:37.758 [2024-07-24 20:52:33.202514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.202667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.202821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.202846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.202983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.203130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 
00:24:37.758 [2024-07-24 20:52:33.203312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.203487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.203673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.203830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.758 [2024-07-24 20:52:33.203855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.758 qpair failed and we were unable to recover it. 00:24:37.758 [2024-07-24 20:52:33.204050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.204204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.204373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.204541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.204688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.204899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.204924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.205062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.205087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.205228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.205260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.205419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.205462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.205625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.205651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.205835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.205863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.206008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.206182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.206341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.206504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.206690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.206869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.206897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.207076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.207263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.207417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.207582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.207728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.207858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.207886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.208023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.208196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.208379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.208512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.208674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.208840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.208867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.209019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.209148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.209316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.209497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.209686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.209847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.209887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.210030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.210201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.210385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.210543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.210733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 00:24:37.759 [2024-07-24 20:52:33.210892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.759 [2024-07-24 20:52:33.210918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.759 qpair failed and we were unable to recover it. 
00:24:37.759 [2024-07-24 20:52:33.211054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.760 [2024-07-24 20:52:33.211079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.760 qpair failed and we were unable to recover it. 00:24:37.760 [2024-07-24 20:52:33.211261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.760 [2024-07-24 20:52:33.211288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.760 qpair failed and we were unable to recover it. 00:24:37.760 [2024-07-24 20:52:33.211432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.760 [2024-07-24 20:52:33.211460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.760 qpair failed and we were unable to recover it. 00:24:37.760 [2024-07-24 20:52:33.211608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.760 [2024-07-24 20:52:33.211636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.760 qpair failed and we were unable to recover it. 00:24:37.760 [2024-07-24 20:52:33.211784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.760 [2024-07-24 20:52:33.211808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.760 qpair failed and we were unable to recover it. 
00:24:37.760 [2024-07-24 20:52:33.211912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.760 [2024-07-24 20:52:33.211936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:37.760 qpair failed and we were unable to recover it.
[the same connect()/qpair-error/recovery-failure triplet for tqpair=0x672250 (addr=10.0.0.2, port=4420) repeats with new timestamps through 2024-07-24 20:52:33.231856]
00:24:37.762 [2024-07-24 20:52:33.232035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.762 [2024-07-24 20:52:33.232060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.762 qpair failed and we were unable to recover it. 00:24:37.762 [2024-07-24 20:52:33.232190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.762 [2024-07-24 20:52:33.232215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.762 qpair failed and we were unable to recover it. 00:24:37.762 [2024-07-24 20:52:33.232357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.762 [2024-07-24 20:52:33.232382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.762 qpair failed and we were unable to recover it. 00:24:37.762 [2024-07-24 20:52:33.232493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.762 [2024-07-24 20:52:33.232518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.762 qpair failed and we were unable to recover it. 00:24:37.762 [2024-07-24 20:52:33.232646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.232671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.232777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.232802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.232958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.232986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.233124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.233274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.233439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.233597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.233786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.233949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.233973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.234079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.234291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.234462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.234645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.234797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.234961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.234988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.235144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.235168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.235328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.235353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.235453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.235492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.235639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.235667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.235837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.235865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.236021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.236186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.236349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.236491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.236656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.236818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.236843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 
00:24:37.763 [2024-07-24 20:52:33.237003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.763 [2024-07-24 20:52:33.237031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.763 qpair failed and we were unable to recover it. 00:24:37.763 [2024-07-24 20:52:33.237173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.237201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.237337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.237361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.237470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.237495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.237644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.237673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.237815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.237842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.237991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.238159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.238319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.238544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.238689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.238844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.238868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.239013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.239210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.239407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.239563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.239745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.239920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.239947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.240101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.240259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.240389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.240529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.240695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.240846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.240870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.241007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.241171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.241338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.241476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.241652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.241819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.241847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.242027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.242227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.242388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.242566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.242743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 
00:24:37.764 [2024-07-24 20:52:33.242904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.242948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.764 [2024-07-24 20:52:33.243089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.764 [2024-07-24 20:52:33.243118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.764 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.243297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.243322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.243458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.243483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.243643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.243685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.243805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.243832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.243979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.244130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.244287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.244509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.244664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.244826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.244850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.244992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.245174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.245362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.245520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.245679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.245868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.245895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.246040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.246196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.246364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.246571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.246753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.246884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.246908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.247015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.247157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.247299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.247490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.247624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.247796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.247960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.247988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.248140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.248305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.248441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.248590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 
00:24:37.765 [2024-07-24 20:52:33.248799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.248957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.248982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.249111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.249136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.765 [2024-07-24 20:52:33.249285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.765 [2024-07-24 20:52:33.249314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.765 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.249468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.249493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.249633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.249658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.249783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.249815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.249919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.249943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.250044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.250216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.250391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.250592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.250794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.250963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.250990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.251113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.251304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.251484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.251684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.251823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.251962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.251991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.252145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.252281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.252431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.252637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.252792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.252925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.252950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.253078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.253255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.253446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.253604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.253776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.253946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.253974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.254133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.254273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.254413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.254555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.254738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.254881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.254906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.255015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.255040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 00:24:37.766 [2024-07-24 20:52:33.255194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.766 [2024-07-24 20:52:33.255222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.766 qpair failed and we were unable to recover it. 
00:24:37.766 [2024-07-24 20:52:33.255348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.255373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.255479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.255504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.255650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.255678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.255837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.255862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.255962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.255987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 
00:24:37.767 [2024-07-24 20:52:33.256149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.256192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.256335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.767 [2024-07-24 20:52:33.256360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:37.767 qpair failed and we were unable to recover it. 00:24:37.767 [2024-07-24 20:52:33.256495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.256535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.256664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.256690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.256793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.256818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 
00:24:38.042 [2024-07-24 20:52:33.256922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.256947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 
00:24:38.042 [2024-07-24 20:52:33.257625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.257911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.257937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.258078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.258222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 
00:24:38.042 [2024-07-24 20:52:33.258392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.258548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.258735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.258910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.258938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.259058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 
00:24:38.042 [2024-07-24 20:52:33.259182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.259338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.259470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.259647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 00:24:38.042 [2024-07-24 20:52:33.259775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.042 [2024-07-24 20:52:33.259815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.042 qpair failed and we were unable to recover it. 
00:24:38.042 [2024-07-24 20:52:33.259969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.042 [2024-07-24 20:52:33.259998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.042 qpair failed and we were unable to recover it.
00:24:38.042-00:24:38.045 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for each retry against tqpair=0x672250 (addr=10.0.0.2, port=4420), timestamps 2024-07-24 20:52:33.260 through 20:52:33.279 ...]
00:24:38.045 [2024-07-24 20:52:33.279374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.045 [2024-07-24 20:52:33.279415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.045 qpair failed and we were unable to recover it. 00:24:38.045 [2024-07-24 20:52:33.279563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.045 [2024-07-24 20:52:33.279590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.045 qpair failed and we were unable to recover it. 00:24:38.045 [2024-07-24 20:52:33.279732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.045 [2024-07-24 20:52:33.279760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.045 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.279910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.279935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.280108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.280261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.280434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.280620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.280776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.280961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.280989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.281132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.281309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.281453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.281584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.281772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.281924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.281948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.282060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.282264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.282426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.282594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.282723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.282932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.282960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.283105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.283289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.283457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.283618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.283770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.283933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.283958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.284063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.284254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 
00:24:38.046 [2024-07-24 20:52:33.284465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.284625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.284762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.284927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.046 [2024-07-24 20:52:33.284952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.046 qpair failed and we were unable to recover it. 00:24:38.046 [2024-07-24 20:52:33.285099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.285278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.285464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.285592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.285791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.285947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.285972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.286125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.286150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.286302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.286331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.286478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.286507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.286663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.286689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.286793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.286818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.286996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.287197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.287390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.287544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.287733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.287912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.287940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.288062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.288194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.288369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.288550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.288735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.288888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.288929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.289080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.289234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.289368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.289496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.289678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.289889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.289917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.290062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.290223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.290386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.290574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.290762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.290915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.290940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.291102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.291130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 
00:24:38.047 [2024-07-24 20:52:33.291238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.291272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.291406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.047 [2024-07-24 20:52:33.291432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.047 qpair failed and we were unable to recover it. 00:24:38.047 [2024-07-24 20:52:33.291566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.291591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.291747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.291775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.291920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.291948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 
00:24:38.048 [2024-07-24 20:52:33.292105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.292130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.292262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.292307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.292448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.292476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.292621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.292649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 00:24:38.048 [2024-07-24 20:52:33.292767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.048 [2024-07-24 20:52:33.292792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.048 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.311838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.311866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.312705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.312849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.312976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.313162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.313287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.313423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.313593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.313767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.313946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.313972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.314096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.314247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.314404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.314569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.314724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.314895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.314923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.315092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.315120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.315269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.315295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.315453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.315480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.315682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.315733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.315907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.315935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.051 [2024-07-24 20:52:33.316092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.316117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.316253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.316296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.316410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.316438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.316571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.316597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 00:24:38.051 [2024-07-24 20:52:33.316743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.051 [2024-07-24 20:52:33.316768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.051 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.316902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.316927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.317666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.317844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.317975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.318156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.318349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.318534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.318719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.318847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.318872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.318986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.319128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.319284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.319414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.319546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.319696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.319852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.319877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.319983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.320199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.320400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.320543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.320704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.320845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.320870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.321001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.321173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.321339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.321546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.321702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.321864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.321889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.322020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.322146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.322311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 
00:24:38.052 [2024-07-24 20:52:33.322468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.322593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.322821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.052 [2024-07-24 20:52:33.322847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.052 qpair failed and we were unable to recover it. 00:24:38.052 [2024-07-24 20:52:33.323015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 00:24:38.053 [2024-07-24 20:52:33.323154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 
00:24:38.053 [2024-07-24 20:52:33.323293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 00:24:38.053 [2024-07-24 20:52:33.323452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 00:24:38.053 [2024-07-24 20:52:33.323622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 00:24:38.053 [2024-07-24 20:52:33.323783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 00:24:38.053 [2024-07-24 20:52:33.323935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.053 [2024-07-24 20:52:33.323960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.053 qpair failed and we were unable to recover it. 
00:24:38.053 [2024-07-24 20:52:33.324117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.053 [2024-07-24 20:52:33.324144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.053 qpair failed and we were unable to recover it.
[the three-line group above repeats verbatim on every retry, with only the driver timestamps advancing (from 20:52:33.324294 through 20:52:33.342931) and the log timestamp ticking from 00:24:38.053 to 00:24:38.056; the duplicate repetitions are omitted here]
00:24:38.056 [2024-07-24 20:52:33.343039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.343255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.343457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.343629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.343758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 
00:24:38.056 [2024-07-24 20:52:33.343888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.343914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.344094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.344276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.344400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.344562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 
00:24:38.056 [2024-07-24 20:52:33.344728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.344879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.344904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.345042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.345197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.345398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 
00:24:38.056 [2024-07-24 20:52:33.345607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.345735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.345921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.345948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.346081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.346109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.346320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.346347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 
00:24:38.056 [2024-07-24 20:52:33.346473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.346498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.346618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.346646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.346796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.346823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.347000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.347025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.347188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.347215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 
00:24:38.056 [2024-07-24 20:52:33.347361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.347390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.056 [2024-07-24 20:52:33.347498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.056 [2024-07-24 20:52:33.347530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.056 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.347713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.347738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.347890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.347918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.348029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.348254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.348435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.348590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.348751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.348901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.348929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.349091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.349277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.349466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.349632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.349811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.349969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.349994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.350158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.350335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.350488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.350646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.350776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.350908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.350934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.351046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.351184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.351349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.351524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.351673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.351838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.351863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.352041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.352208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.352383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.352594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.352773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.352961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.352986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.353084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.353212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.353373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.353547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.353700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 00:24:38.057 [2024-07-24 20:52:33.353856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.057 [2024-07-24 20:52:33.353881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.057 qpair failed and we were unable to recover it. 
00:24:38.057 [2024-07-24 20:52:33.354035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.354179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.354388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.354546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.354705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 
00:24:38.058 [2024-07-24 20:52:33.354867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.354894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.355069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.355094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.355202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.355227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.355396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.355423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 00:24:38.058 [2024-07-24 20:52:33.355593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.355620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it. 
00:24:38.058 [2024-07-24 20:52:33.355799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.058 [2024-07-24 20:52:33.355824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.058 qpair failed and we were unable to recover it.
00:24:38.061 [previous connect() failure (errno = 111) and qpair recovery error for tqpair=0x672250, addr=10.0.0.2, port=4420 repeated through 2024-07-24 20:52:33.374424]
00:24:38.061 [2024-07-24 20:52:33.374574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.374601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.374782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.374807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.374914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.374955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.375098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.375237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 
00:24:38.061 [2024-07-24 20:52:33.375425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.375581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.375721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.375910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.375938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.376120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 
00:24:38.061 [2024-07-24 20:52:33.376284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.376415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.376605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.376784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.376950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.376975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 
00:24:38.061 [2024-07-24 20:52:33.377102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.377276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.377420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.377554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.377707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 
00:24:38.061 [2024-07-24 20:52:33.377874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.377902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.061 [2024-07-24 20:52:33.378074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.061 [2024-07-24 20:52:33.378099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.061 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.378204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.378229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.378418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.378447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.378561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.378588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.378768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.378793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.378941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.378969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.379144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.379171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.379313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.379341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.379492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.379517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.379650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.379690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.379943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.379994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.380151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.380338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.380468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.380599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.380770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.380924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.380949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.381377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.381865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.381991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.382149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.382283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.382468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.382625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.382804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.382829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.382960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.383116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.383333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.383458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.383614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 
00:24:38.062 [2024-07-24 20:52:33.383798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.383947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.383975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.384104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.384129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.384249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.062 [2024-07-24 20:52:33.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.062 qpair failed and we were unable to recover it. 00:24:38.062 [2024-07-24 20:52:33.384376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.384400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 
00:24:38.063 [2024-07-24 20:52:33.384501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.384526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.384659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.384684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.384820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.384844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.384949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.384974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.385113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 
00:24:38.063 [2024-07-24 20:52:33.385254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.385380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.385507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.385694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.385871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.385896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 
00:24:38.063 [2024-07-24 20:52:33.386008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.386155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.386299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.386453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.386612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 
00:24:38.063 [2024-07-24 20:52:33.386737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.386916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.386943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.387069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.387095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.387227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.387259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 00:24:38.063 [2024-07-24 20:52:33.387440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.063 [2024-07-24 20:52:33.387464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.063 qpair failed and we were unable to recover it. 
00:24:38.066 [2024-07-24 20:52:33.406008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.406199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.406384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.406567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.406690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 
00:24:38.066 [2024-07-24 20:52:33.406910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.406938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.407087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.407115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.407269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.407295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.407426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.407451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.407689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.407743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 
00:24:38.066 [2024-07-24 20:52:33.407891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.407919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.408097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.408233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.408414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.408567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 
00:24:38.066 [2024-07-24 20:52:33.408745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.408957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.408985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.066 qpair failed and we were unable to recover it. 00:24:38.066 [2024-07-24 20:52:33.409141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.066 [2024-07-24 20:52:33.409169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.409311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.409339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.409535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.409560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.409685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.409713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.409864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.409891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.410073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.410227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.410390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.410580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.410727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.410882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.410910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.411070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.411259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.411431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.411591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.411778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.411959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.411987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.412099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.412258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.412440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.412593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.412765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.412937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.412961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.413133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.413160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.413302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.413367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.413506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.413534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.413687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.413712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.413843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.413883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.413999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.414152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.414360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.414496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.414664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.414818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.414845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.414999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.415025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.415158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.415199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.415315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 00:24:38.067 [2024-07-24 20:52:33.415491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.067 [2024-07-24 20:52:33.415518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.067 qpair failed and we were unable to recover it. 
00:24:38.067 [2024-07-24 20:52:33.415680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.415705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.415817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.415859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.416006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.416210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.416416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 
00:24:38.068 [2024-07-24 20:52:33.416589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.416758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.416957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.416985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.417136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.417161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.417294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.417320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 
00:24:38.068 [2024-07-24 20:52:33.417469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.417496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.417673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.417701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.417855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.417880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.418014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.418205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 
00:24:38.068 [2024-07-24 20:52:33.418354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.418542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.418694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.418877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.418902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 00:24:38.068 [2024-07-24 20:52:33.419070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.068 [2024-07-24 20:52:33.419095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.068 qpair failed and we were unable to recover it. 
00:24:38.068 [2024-07-24 20:52:33.419226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.068 [2024-07-24 20:52:33.419259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.068 qpair failed and we were unable to recover it.
00:24:38.071 [... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 20:52:33.419392 through 20:52:33.438205 ...]
00:24:38.071 [2024-07-24 20:52:33.438342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.438368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.438534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.438559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.438685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.438729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.438838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.438866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.439037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 
00:24:38.071 [2024-07-24 20:52:33.439219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.439360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.439497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.439701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.439906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.439930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 
00:24:38.071 [2024-07-24 20:52:33.440031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.440056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.440229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.071 [2024-07-24 20:52:33.440274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.071 qpair failed and we were unable to recover it. 00:24:38.071 [2024-07-24 20:52:33.440412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.440454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.440591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.440616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.440721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.440750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.440883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.440908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.441060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.441234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.441415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.441554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.441729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.441909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.441934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.442059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.442084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.442272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.442300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.442444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.442471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.442626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.442651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.442779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.442819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.442971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.443176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.443327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.443514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.443699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.443847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.443875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.444026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.444051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.444179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.444221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.444406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.444434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.444608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.444635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.444787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.444812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.444990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.445174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.445337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.445533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.445685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.445844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.445870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.446031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.446225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.446397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.446581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.446744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 00:24:38.072 [2024-07-24 20:52:33.446927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.446952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.072 qpair failed and we were unable to recover it. 
00:24:38.072 [2024-07-24 20:52:33.447091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.072 [2024-07-24 20:52:33.447116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.447276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.447305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.447419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.447447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.447568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.447593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.447706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.447731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.073 [2024-07-24 20:52:33.447871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.447903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.448048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.448228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.448368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.448542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.073 [2024-07-24 20:52:33.448740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.448951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.448976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.449129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.449295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.449472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.073 [2024-07-24 20:52:33.449626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.449791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.449947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.449972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.450102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.450261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.073 [2024-07-24 20:52:33.450426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.450598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.450771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.450952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.450978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 00:24:38.073 [2024-07-24 20:52:33.451093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.451118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.073 [2024-07-24 20:52:33.451275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.073 [2024-07-24 20:52:33.451300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.073 qpair failed and we were unable to recover it. 
00:24:38.076 (last message sequence repeated through [2024-07-24 20:52:33.470451] with identical errno = 111, tqpair=0x672250, addr=10.0.0.2, port=4420)
00:24:38.076 [2024-07-24 20:52:33.470608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.470636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.470795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.470828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.471002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.471029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.471186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.471211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.471357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.471383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 
00:24:38.076 [2024-07-24 20:52:33.471513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.471554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.471735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.076 [2024-07-24 20:52:33.471760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.076 qpair failed and we were unable to recover it. 00:24:38.076 [2024-07-24 20:52:33.471890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.471915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.472015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.472224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.472428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.472552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.472740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.472938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.472963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.473068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.473226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.473416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.473594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.473765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.473936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.473961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.474145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.474287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.474434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.474644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.474807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.474959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.474984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.475119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.475293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.475431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.475618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.475761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.475926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.475951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.476110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.476150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.476303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.476331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.476471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.476499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.476677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.476702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.476825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.476868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.477467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.477899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.477924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.077 [2024-07-24 20:52:33.478057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.478082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 
00:24:38.077 [2024-07-24 20:52:33.478268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.077 [2024-07-24 20:52:33.478297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.077 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.478437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.478465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.478610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.478635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.478750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.478793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.478940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.479064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.479268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.479416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.479585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.479734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.479941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.479966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.480128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.480321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.480461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.480589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.480711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.480841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.480865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.481467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.481804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.481981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.482148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.482330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.482485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.482613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.482755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.482894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.482923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.078 [2024-07-24 20:52:33.483076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.483104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.483275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.483319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.483423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.483448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.483546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.483571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 00:24:38.078 [2024-07-24 20:52:33.483703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.078 [2024-07-24 20:52:33.483728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.078 qpair failed and we were unable to recover it. 
00:24:38.081 [2024-07-24 20:52:33.502625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.502650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.502754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.502779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.502915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.502941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.503075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.503229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 
00:24:38.081 [2024-07-24 20:52:33.503418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.503615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.503750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.503919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.503947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.504101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.504126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 
00:24:38.081 [2024-07-24 20:52:33.504266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.504308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.504429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.504457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.081 [2024-07-24 20:52:33.504597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.081 [2024-07-24 20:52:33.504625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.081 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.504805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.504830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.505008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.505139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.505319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.505470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.505631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.505776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.505803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.505976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.506152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.506342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.506511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.506680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.506870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.506895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.507000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.507170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.507340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.507507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.507663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.507841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.507868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.508472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.508952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.508977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.509115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.509253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.509431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.509594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.509761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.509941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.509968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.510093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.510254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.510439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.510637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.510782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.510968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.510993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.511133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.511162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.511284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.511313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.511431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.511456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 00:24:38.082 [2024-07-24 20:52:33.511585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.511610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.082 qpair failed and we were unable to recover it. 
00:24:38.082 [2024-07-24 20:52:33.511777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.082 [2024-07-24 20:52:33.511819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.511924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.511954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.512095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.512268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.512450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 
00:24:38.083 [2024-07-24 20:52:33.512612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.512792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.512951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.512991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.513100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.513305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 
00:24:38.083 [2024-07-24 20:52:33.513460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.513618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.513794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.513970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.513998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.514146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 
00:24:38.083 [2024-07-24 20:52:33.514301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.514461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.514619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.514769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 00:24:38.083 [2024-07-24 20:52:33.514893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.083 [2024-07-24 20:52:33.514918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.083 qpair failed and we were unable to recover it. 
00:24:38.083 [2024-07-24 20:52:33.515076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.083 [2024-07-24 20:52:33.515103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.083 qpair failed and we were unable to recover it.
00:24:38.086 [the three-line sequence above repeats ~115 more times between 20:52:33.515 and 20:52:33.533, all with errno = 111 (ECONNREFUSED) for tqpair=0x672250, addr=10.0.0.2, port=4420; repeats elided]
00:24:38.086 [2024-07-24 20:52:33.533799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.533824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.533972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.533999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.534120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.534280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.534431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.534586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.534739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.534945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.534970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.535105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.535232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.535403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.535561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.535717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.535876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.535901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.536038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.536216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.536369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.536523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.536693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.536838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.536863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.536992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.537141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.537282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.537435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.537599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.537763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.537907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.537936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.538087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.538205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.538391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.538568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.538727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.538882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.538907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.539045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.539073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.539212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.539247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 
00:24:38.086 [2024-07-24 20:52:33.539421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.539446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.539573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.539601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.539749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.086 [2024-07-24 20:52:33.539777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.086 qpair failed and we were unable to recover it. 00:24:38.086 [2024-07-24 20:52:33.539949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.539976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.540091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.540116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.540268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.540294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.540451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.540479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.540621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.540649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.540800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.540825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.540996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.541127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.541281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.541434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.541573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.541730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.541861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.541886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.542014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.542180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.542385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.542561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.542769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.542942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.542970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.543116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.543290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.543462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.543593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.543755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.543931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.543959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.544087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.544112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.544248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.544274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.544465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.544493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.544680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.544705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.544843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.544867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.545048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.545195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.545401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.545558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.545710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.545935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.545959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.546119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.087 [2024-07-24 20:52:33.546274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.546431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.546613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.546778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 00:24:38.087 [2024-07-24 20:52:33.546952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.087 [2024-07-24 20:52:33.546977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.087 qpair failed and we were unable to recover it. 
00:24:38.089 [2024-07-24 20:52:33.565857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.565885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.566045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.566196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.566374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.566547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 
00:24:38.089 [2024-07-24 20:52:33.566733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.566888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.566913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.567026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.567051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.567188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.567213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.567321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.567346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 
00:24:38.089 [2024-07-24 20:52:33.567477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.567502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.567613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.089 [2024-07-24 20:52:33.567638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.089 qpair failed and we were unable to recover it. 00:24:38.089 [2024-07-24 20:52:33.567783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.567808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.567936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.567961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.568069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.568235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.568416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.568606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.568741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.568899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.568925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.569071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.569267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.569425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.569598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.569771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.569953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.569977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.570085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.570250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.570401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.570608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.570752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.570934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.570961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.571078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.571234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.571367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.571581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.571722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.571897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.571922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.572051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.572247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.572406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.572603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.572744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.572927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.572955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.573075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.573104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.573264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.573291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.573454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.573480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.573736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.573786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.573959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.573987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.574162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.574298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.574433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.574667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.574824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.574973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.574999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.575113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.575286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.575465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.575624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.575814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.575955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.575982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.576131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.576289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.576477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.576635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.576800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.576951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.576976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.577109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.577270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.577406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 
00:24:38.090 [2024-07-24 20:52:33.577556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.577740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.577876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.577904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.090 [2024-07-24 20:52:33.578058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.090 [2024-07-24 20:52:33.578083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.090 qpair failed and we were unable to recover it. 00:24:38.091 [2024-07-24 20:52:33.578208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.091 [2024-07-24 20:52:33.578233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.091 qpair failed and we were unable to recover it. 
00:24:38.091 [2024-07-24 20:52:33.578403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.091 [2024-07-24 20:52:33.578432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.091 qpair failed and we were unable to recover it. 
[last message group repeated through 2024-07-24 20:52:33.597131 with identical content: connect() failed, errno = 111; sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:24:38.375 [2024-07-24 20:52:33.597273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.597299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.597410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.597435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.597604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.597630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.597738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.597768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.597903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.597944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.598063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.598230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.598394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.598576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.598728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.598873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.598902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.599064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.599227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.599401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.599616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.599774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.599902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.599927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.600030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.600174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.600304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.600434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.600649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.600824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.600851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.601033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.601200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.601394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.601537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.601715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.601886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.601911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.602078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.602105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 
00:24:38.375 [2024-07-24 20:52:33.602229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.602263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.602419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.602444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.602577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.375 [2024-07-24 20:52:33.602602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.375 qpair failed and we were unable to recover it. 00:24:38.375 [2024-07-24 20:52:33.602765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.602807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.602947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.602974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.603120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.603149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.603332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.603359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.603496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.603536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.603710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.603738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.603883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.603908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.604041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.604266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.604417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.604555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.604699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.604897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.604926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.605091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.605121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.605281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.605307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.605445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.605471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.605640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.605670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.605793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.605823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.605975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.606140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.606320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.606477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.606672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.606830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.606870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.607049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.607077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.607220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.607254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.607415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.607440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.607639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.607691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.607866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.607894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.608005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.608214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.608377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.608513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.608706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.608890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.608916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.609091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.609118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 00:24:38.376 [2024-07-24 20:52:33.609264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.376 [2024-07-24 20:52:33.609292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.376 qpair failed and we were unable to recover it. 
00:24:38.376 [2024-07-24 20:52:33.609404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.609432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.609587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.609615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.609730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.609755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.609885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.609911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.610030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 
00:24:38.377 [2024-07-24 20:52:33.610184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.610356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.610499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.610687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 00:24:38.377 [2024-07-24 20:52:33.610898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.377 [2024-07-24 20:52:33.610923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.377 qpair failed and we were unable to recover it. 
00:24:38.378 [2024-07-24 20:52:33.616270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.616297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1695322 Killed "${NVMF_APP[@]}" "$@"
00:24:38.378 [2024-07-24 20:52:33.616406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.616431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 [2024-07-24 20:52:33.616533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.616558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 [2024-07-24 20:52:33.616685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.616713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:38.378 [2024-07-24 20:52:33.616859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.616888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:38.378 [2024-07-24 20:52:33.617020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.617045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:38.378 [2024-07-24 20:52:33.617178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.617203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:38.378 [2024-07-24 20:52:33.617336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.617366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.378 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.378 [2024-07-24 20:52:33.617514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.378 [2024-07-24 20:52:33.617542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.378 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.621266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.621292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.621404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.621429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.621547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.621575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1695888
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:38.380 [2024-07-24 20:52:33.621689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.621717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1695888
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.621859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.621885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1695888 ']'
00:24:38.380 [2024-07-24 20:52:33.621991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:38.380 [2024-07-24 20:52:33.622177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:38.380 [2024-07-24 20:52:33.622354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:38.380 [2024-07-24 20:52:33.622478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.380 [2024-07-24 20:52:33.622608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.622770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.380 [2024-07-24 20:52:33.622956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.380 [2024-07-24 20:52:33.622982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.380 qpair failed and we were unable to recover it.
00:24:38.381 [2024-07-24 20:52:33.628784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.628812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.629003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.629207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.629398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.629544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 
00:24:38.381 [2024-07-24 20:52:33.629727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.629894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.629920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.630083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.630126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.630277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.630306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 00:24:38.381 [2024-07-24 20:52:33.630435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.630460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.381 qpair failed and we were unable to recover it. 
00:24:38.381 [2024-07-24 20:52:33.630595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.381 [2024-07-24 20:52:33.630620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.630752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.630780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.630921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.630949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.631100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.631125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.631237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.631267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.631425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.631453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.631597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.631625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.631774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.631799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.631979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.632162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.632360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.632491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.632616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.632778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.632924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.632953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.633081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.633234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.633371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.633506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.633701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.633826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.633853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.634676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.634841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.634992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.635145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.635342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.635491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.635674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.635848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.635895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.636039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.636066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.636213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.636248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 
00:24:38.382 [2024-07-24 20:52:33.636411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.636436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.636540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.636582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.636690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.382 [2024-07-24 20:52:33.636718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.382 qpair failed and we were unable to recover it. 00:24:38.382 [2024-07-24 20:52:33.636838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.636866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.637040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.637256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.637397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.637538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.637693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.637853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.637878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.638041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.638195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.638360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.638491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.638655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.638850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.638877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.639610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.639906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.639932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.640055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.640220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.640366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.640531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.640672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.640835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.640860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.641008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.641172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.641312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.641452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.641614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.641775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 
00:24:38.383 [2024-07-24 20:52:33.641911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.642042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.642073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.642177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.642202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.642321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.383 [2024-07-24 20:52:33.642346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.383 qpair failed and we were unable to recover it. 00:24:38.383 [2024-07-24 20:52:33.642450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.642476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.642608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.642634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.642781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.642807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.642940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.642965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.643130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.643269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.643440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.643638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.643811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.643973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.643998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.644158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.644324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.644481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.644647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.644805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.644964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.644990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.645127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.645271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.645396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.645580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.645736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.645860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.645885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 
00:24:38.384 [2024-07-24 20:52:33.646641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.646954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.646980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.647095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.384 [2024-07-24 20:52:33.647120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.384 qpair failed and we were unable to recover it. 00:24:38.384 [2024-07-24 20:52:33.647264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.647290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.647401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.647426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.647555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.647580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.647739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.647764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.647902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.647928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.648064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.648197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.648359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.648529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.648683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.648823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.648848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.648984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.649166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.649323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.649483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.649667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.649788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.649952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.649977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.650125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.650294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.650452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.650615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.650768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.650936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.650961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.651334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.651970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.651996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 
00:24:38.385 [2024-07-24 20:52:33.652105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.652130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.652285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.652311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.652450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.652476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.385 [2024-07-24 20:52:33.652589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.385 [2024-07-24 20:52:33.652619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.385 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.652767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.652792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
00:24:38.386 [2024-07-24 20:52:33.652914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.652940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.653080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.653255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.653391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.653549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
00:24:38.386 [2024-07-24 20:52:33.653720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.653875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.653901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.654065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.654197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.654270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680230 (9): Bad file descriptor 00:24:38.386 [2024-07-24 20:52:33.654463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
00:24:38.386 [2024-07-24 20:52:33.654626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.654803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.654962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.654988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.655100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.655227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
00:24:38.386 [2024-07-24 20:52:33.655407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.655561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.655721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.655881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.655907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 00:24:38.386 [2024-07-24 20:52:33.656008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.656033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
00:24:38.386 [2024-07-24 20:52:33.656144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.386 [2024-07-24 20:52:33.656169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.386 qpair failed and we were unable to recover it. 
[The triplet above — posix.c:1023:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats identically from 20:52:33.656307 through 20:52:33.673330; repeats elided.]
00:24:38.388 [2024-07-24 20:52:33.664935] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:24:38.388 [2024-07-24 20:52:33.665013] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:24:38.390 [2024-07-24 20:52:33.673436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.673461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.673598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.673623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.673752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.673777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.673898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.673923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.674051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.674209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.674367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.674529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.674658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.674790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.674943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.674968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.675100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.675295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.675452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.675620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.675780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.675911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.675936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.676526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.676916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.676942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.677039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.677182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.677351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.677475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.677605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.677790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 
00:24:38.390 [2024-07-24 20:52:33.677923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.677948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.678087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.678112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.678251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.390 [2024-07-24 20:52:33.678277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.390 qpair failed and we were unable to recover it. 00:24:38.390 [2024-07-24 20:52:33.678409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.678435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.678551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.678576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.678712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.678737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.678878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.678903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.679005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.679159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.679370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.679562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.679729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.679890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.679915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.680313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.680878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.680990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.681168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.681350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.681538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.681727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.681900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.681925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.682038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.682198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.682371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.682526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.682663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.682850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.682876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.391 [2024-07-24 20:52:33.683627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.683920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.683946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.684055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.684081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 00:24:38.391 [2024-07-24 20:52:33.684199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.391 [2024-07-24 20:52:33.684238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.391 qpair failed and we were unable to recover it. 
00:24:38.392 [2024-07-24 20:52:33.684383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.392 [2024-07-24 20:52:33.684411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.392 qpair failed and we were unable to recover it. 00:24:38.392 [2024-07-24 20:52:33.684549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.392 [2024-07-24 20:52:33.684574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.392 qpair failed and we were unable to recover it. 00:24:38.392 [2024-07-24 20:52:33.684706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.392 [2024-07-24 20:52:33.684731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.392 qpair failed and we were unable to recover it. 00:24:38.392 [2024-07-24 20:52:33.684840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.392 [2024-07-24 20:52:33.684865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.392 qpair failed and we were unable to recover it. 00:24:38.392 [2024-07-24 20:52:33.684994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.392 [2024-07-24 20:52:33.685021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.392 qpair failed and we were unable to recover it. 
00:24:38.392 [2024-07-24 20:52:33.685184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.685326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.685483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.685619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.685757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.685928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.685954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.686947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.686972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.687927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.687952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.688934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.688959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.689864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.392 [2024-07-24 20:52:33.689890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.392 qpair failed and we were unable to recover it.
00:24:38.392 [2024-07-24 20:52:33.690025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.690184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.690458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.690613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.690773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.690933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.690958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.691875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.691977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.692966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.692992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.693960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.693985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.694905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.694931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.393 [2024-07-24 20:52:33.695763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.393 [2024-07-24 20:52:33.695789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.393 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.695948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.695974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.696857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.696989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.697164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.697324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.697486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.697647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 EAL: No free 2048 kB hugepages reported on node 1
00:24:38.394 [2024-07-24 20:52:33.697787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.697950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.697975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.698915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.698942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.699863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.699994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.700019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.700133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.700159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.700305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.700343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.394 [2024-07-24 20:52:33.700464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.394 [2024-07-24 20:52:33.700490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.394 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.700599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.700625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.700767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.700793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.700901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.700926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.701963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.701989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.702934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.395 [2024-07-24 20:52:33.702959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.395 qpair failed and we were unable to recover it.
00:24:38.395 [2024-07-24 20:52:33.703066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.703201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.703348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.703518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.703681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 
00:24:38.395 [2024-07-24 20:52:33.703837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.703862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.703992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.704149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.704319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.704462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 
00:24:38.395 [2024-07-24 20:52:33.704638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.704830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.704963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.704988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.705098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.705252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 
00:24:38.395 [2024-07-24 20:52:33.705413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.705542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.705691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.395 [2024-07-24 20:52:33.705830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.395 [2024-07-24 20:52:33.705854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.395 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.705996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.706147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.706291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.706450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.706597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.706783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.706945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.706970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.707084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.707249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.707415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.707542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.707712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.707866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.707891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.708457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.708946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.708972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.709133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.709272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.709436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.709609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.709748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.709930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.709956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.710070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.710277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.710457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.710581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.710768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 
00:24:38.396 [2024-07-24 20:52:33.710915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.710940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.711077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.711102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.711204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.711230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.396 [2024-07-24 20:52:33.711349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.396 [2024-07-24 20:52:33.711375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.396 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.711507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.711532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 
00:24:38.397 [2024-07-24 20:52:33.711665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.711690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.711835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.711862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.711977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.712147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.712291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 
00:24:38.397 [2024-07-24 20:52:33.712428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.712593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.712750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.712891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.712917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.713064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 
00:24:38.397 [2024-07-24 20:52:33.713228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.713409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.713606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.713769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.713935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.713961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 
00:24:38.397 [2024-07-24 20:52:33.714078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.714104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.714240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.714274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.714403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.714428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.714563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.714589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 00:24:38.397 [2024-07-24 20:52:33.714697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.397 [2024-07-24 20:52:33.714723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.397 qpair failed and we were unable to recover it. 
00:24:38.397 [2024-07-24 20:52:33.714879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.397 [2024-07-24 20:52:33.714904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.397 qpair failed and we were unable to recover it.
00:24:38.397 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats from 20:52:33.715041 through 20:52:33.731928 for tqpair handles 0x7f4fb8000b90, 0x7f4fc8000b90, 0x7f4fc0000b90, and 0x672250, all against addr=10.0.0.2, port=4420 ...]
00:24:38.401 [2024-07-24 20:52:33.731986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:38.401 [2024-07-24 20:52:33.732050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.401 [2024-07-24 20:52:33.732074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.401 qpair failed and we were unable to recover it.
00:24:38.401 [2024-07-24 20:52:33.732213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.401 [2024-07-24 20:52:33.732257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.401 qpair failed and we were unable to recover it.
00:24:38.401 [2024-07-24 20:52:33.732373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.732400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.732515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.732540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.732698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.732723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.732849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.732874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.733052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 
00:24:38.401 [2024-07-24 20:52:33.733236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.733399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.733580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.733776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.733902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.733928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 
00:24:38.401 [2024-07-24 20:52:33.734035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.734178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.734356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.734522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.734684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 
00:24:38.401 [2024-07-24 20:52:33.734813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.734840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.734968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.735118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.735278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.735429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 
00:24:38.401 [2024-07-24 20:52:33.735595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.735782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.735969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.735994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.736101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.736127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.736238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.736271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 
00:24:38.401 [2024-07-24 20:52:33.736435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.736460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.736571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.401 [2024-07-24 20:52:33.736596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.401 qpair failed and we were unable to recover it. 00:24:38.401 [2024-07-24 20:52:33.736711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.736736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.736845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.736872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.736986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.737172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.737319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.737483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.737676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.737809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.737942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.737969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.738673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.738859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.738990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.739178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.739334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.739514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.739696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.739865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.739892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.739996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.740163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.740315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.740518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.740680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.740841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.740866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.741017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.741183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.741321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.741459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.741628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.741783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 
00:24:38.402 [2024-07-24 20:52:33.741973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.741999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.742103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.742129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.402 qpair failed and we were unable to recover it. 00:24:38.402 [2024-07-24 20:52:33.742247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.402 [2024-07-24 20:52:33.742277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.742400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.742438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.742581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.742607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.742729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.742754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.742883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.742908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.743497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.743941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.743967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.744085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.744239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.744398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.744540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.744674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.744842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.744868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.745029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.745178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.745379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.745563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.745741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.745873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.745899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.746658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.746961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.746986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.747093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.747119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.747234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.747266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 
00:24:38.403 [2024-07-24 20:52:33.747422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.403 [2024-07-24 20:52:33.747450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.403 qpair failed and we were unable to recover it. 00:24:38.403 [2024-07-24 20:52:33.747569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.747595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.747726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.747752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.747859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.747885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.748009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.404 [2024-07-24 20:52:33.748180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.748415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.748556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.748691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.748857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.748883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.404 [2024-07-24 20:52:33.749023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.749185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.749377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.749545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.749679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.404 [2024-07-24 20:52:33.749814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.749958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.749983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.404 [2024-07-24 20:52:33.750548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.750973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.750998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.751102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.404 [2024-07-24 20:52:33.751264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.751400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.751539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.751695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 00:24:38.404 [2024-07-24 20:52:33.751858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.404 [2024-07-24 20:52:33.751883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.404 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.752004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.752173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.752378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.752508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.752651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.752787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.752945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.752971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.753082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.753265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.753453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.753612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.753748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.753886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.753911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.754348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.754968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.754993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.755101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.755239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.755384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.755547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.755705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.755868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.755893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.756053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.756217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.756387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.756573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 
00:24:38.405 [2024-07-24 20:52:33.756708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.756844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.756870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.757011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.757039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.405 [2024-07-24 20:52:33.757203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.405 [2024-07-24 20:52:33.757229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.405 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.757347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.757373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.757474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.757500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.757608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.757633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.757759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.757784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.757926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.757951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.758056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.758200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.758340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.758507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.758638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.758799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.758923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.758948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.759649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.759953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.759978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.760097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.760237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.760451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.760592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.760727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.760874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.760899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.761004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.761156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.761294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.761467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.761594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.761760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 
00:24:38.406 [2024-07-24 20:52:33.761921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.761946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.762070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.762096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.406 [2024-07-24 20:52:33.762206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.406 [2024-07-24 20:52:33.762231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.406 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.762376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.762408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.762516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.762652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.762679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.762820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.762845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.762976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.763131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.763288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.763415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.763546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.763712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.763888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.763914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.764026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.764179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.764334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.764483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.764651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.764817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.764842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.764987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.765678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.765972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.765997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.766404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.766889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.407 [2024-07-24 20:52:33.766995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.767023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 
00:24:38.407 [2024-07-24 20:52:33.767142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.407 [2024-07-24 20:52:33.767179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.407 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.767344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.767382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.767502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.767532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.767663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.767689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.767814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.767840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.767959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.767984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.768085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.768240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.768385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.768538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.768701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.768888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.768914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.769537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.769825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.769989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.770159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.770288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.770450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.770617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.770776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.770913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.770940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.771051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.771213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.771391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.771519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.771672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.771836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.771971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.771996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.772103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.772234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.772407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 
00:24:38.408 [2024-07-24 20:52:33.772531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.772704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.772870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.408 [2024-07-24 20:52:33.772895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.408 qpair failed and we were unable to recover it. 00:24:38.408 [2024-07-24 20:52:33.773004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.773186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.773406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.773568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.773761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.773900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.773925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.774060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.774199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.774350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.774484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.774628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.774804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.774934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.774961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.775691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.775966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.775992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.776125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.776277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.776472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.776611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.776748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.776886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.776912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.777015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 
00:24:38.409 [2024-07-24 20:52:33.777143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.777278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.777449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.777644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.409 qpair failed and we were unable to recover it. 00:24:38.409 [2024-07-24 20:52:33.777777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.409 [2024-07-24 20:52:33.777804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.777940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.777965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.778690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.778874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.778988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.779136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.779313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.779447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.779579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.779712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.779905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.779931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.780033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.780163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.780334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.780468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.780654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.780799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.780956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.780984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.781124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.781304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.781443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.781613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.781804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.781940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.781969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.782077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.782263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.782444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.782631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.782759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.782896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.782927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.783342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 00:24:38.410 [2024-07-24 20:52:33.783941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.410 [2024-07-24 20:52:33.783966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.410 qpair failed and we were unable to recover it. 
00:24:38.410 [2024-07-24 20:52:33.784070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.784254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.784408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.784570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.784699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 
00:24:38.411 [2024-07-24 20:52:33.784859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.784885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 
00:24:38.411 [2024-07-24 20:52:33.785631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.785952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.785977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 
00:24:38.411 [2024-07-24 20:52:33.786386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.786938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.786963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 
00:24:38.411 [2024-07-24 20:52:33.787097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.787267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.787293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.787392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.787418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.787557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.787582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 00:24:38.411 [2024-07-24 20:52:33.787691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.411 [2024-07-24 20:52:33.787716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.411 qpair failed and we were unable to recover it. 
00:24:38.411 [2024-07-24 20:52:33.787829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.411 [2024-07-24 20:52:33.787855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.411 qpair failed and we were unable to recover it.
00:24:38.411 [... the same three-record failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats ~114 more times between 20:52:33.787 and 20:52:33.805, cycling through tqpair=0x7f4fc8000b90, 0x7f4fb8000b90, 0x7f4fc0000b90, and 0x672250 ...]
00:24:38.414 [2024-07-24 20:52:33.805662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.805689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.805829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.805855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.805987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.806130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.806315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 
00:24:38.414 [2024-07-24 20:52:33.806456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.806600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.806761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.806926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.806951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.807088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 
00:24:38.414 [2024-07-24 20:52:33.807220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.807402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.807559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.807685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.807823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 
00:24:38.414 [2024-07-24 20:52:33.807962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.807988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.808129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.808157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.808276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.808303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.808447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.808473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 00:24:38.414 [2024-07-24 20:52:33.808590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.414 [2024-07-24 20:52:33.808616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.414 qpair failed and we were unable to recover it. 
00:24:38.414 [2024-07-24 20:52:33.808777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.808801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.808937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.808963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.809103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.809236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.809405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.809577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.809737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.809872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.809897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.810377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.810964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.810989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.811123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.811279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.811411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.811540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.811673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.811854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.811895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.812672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.812852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.812987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.813113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.813275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.813407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.813570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.813703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.813835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.813865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.814005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.814160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.814305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.814446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.814615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.814751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.814941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.814968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.815081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.815110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.815250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.815289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.815420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.815449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.815594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.815621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 
00:24:38.415 [2024-07-24 20:52:33.815731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.415 [2024-07-24 20:52:33.815757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.415 qpair failed and we were unable to recover it. 00:24:38.415 [2024-07-24 20:52:33.815869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.815894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.816547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.816868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.816974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.817112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.817295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.817459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.817638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.817805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.817967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.817992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.818134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.818284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.818429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.818624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.818754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.818896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.818935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.819609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.819929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.819955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.820118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.820294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.820441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.820606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.820732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.820891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.820916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.821024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.821181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.821377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.821539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.821682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.821832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.821858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.821975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.822127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.822292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.822447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.822642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 
00:24:38.416 [2024-07-24 20:52:33.822787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.822941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.822968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.416 qpair failed and we were unable to recover it. 00:24:38.416 [2024-07-24 20:52:33.823090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.416 [2024-07-24 20:52:33.823118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.823248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.823293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.823415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.823442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.823574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.823601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.823735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.823761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.823921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.823948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.824052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.824222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.824381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.824547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.824705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.824839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.824864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.825143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.825869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.825895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.825996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.826125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.826305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.826480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.826650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.826828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.826855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.826990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.827158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.827313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.827450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.827587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.827752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.827887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.827912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.828043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.828178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.828345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.828502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.828701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.828833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.828860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 
00:24:38.417 [2024-07-24 20:52:33.829020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.829045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.829154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.829180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.829345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.417 [2024-07-24 20:52:33.829372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.417 qpair failed and we were unable to recover it. 00:24:38.417 [2024-07-24 20:52:33.829473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.829498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.829653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.829691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 
00:24:38.418 [2024-07-24 20:52:33.829812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.829840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.829944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.829971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.830117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.830279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.830406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 
00:24:38.418 [2024-07-24 20:52:33.830538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.830707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.830865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.830891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 
00:24:38.418 [2024-07-24 20:52:33.831308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.831901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.831926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 
00:24:38.418 [2024-07-24 20:52:33.832066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.832091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.832223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.832258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.832367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.832393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.832526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.832556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 00:24:38.418 [2024-07-24 20:52:33.832708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.418 [2024-07-24 20:52:33.832746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.418 qpair failed and we were unable to recover it. 
00:24:38.420 [2024-07-24 20:52:33.848276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.848402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.848529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.848675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.848806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.848839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:38.420 [2024-07-24 20:52:33.848873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:38.420 [2024-07-24 20:52:33.848888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:38.420 [2024-07-24 20:52:33.848900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:38.420 [2024-07-24 20:52:33.848910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:38.420 [2024-07-24 20:52:33.848934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.848958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 [2024-07-24 20:52:33.848998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:38.420 [2024-07-24 20:52:33.849195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:38.420 [2024-07-24 20:52:33.849057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:38.420 [2024-07-24 20:52:33.849344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.849913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.849938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.420 [2024-07-24 20:52:33.850090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.420 [2024-07-24 20:52:33.850115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.420 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.850936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.850962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.851886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.851911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.852899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.852998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.853898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.853924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.854956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.854982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.855958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.855983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.856091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.856116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.856263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.421 [2024-07-24 20:52:33.856289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.421 qpair failed and we were unable to recover it.
00:24:38.421 [2024-07-24 20:52:33.856398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.856424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.856531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.856569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.856677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.856703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.856857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.856883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.856980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.857914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.857939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.858908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.858934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.859812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.859979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.860898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.860924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.861031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.861056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.861163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.861189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.422 [2024-07-24 20:52:33.861325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.422 [2024-07-24 20:52:33.861365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.422 qpair failed and we were unable to recover it.
00:24:38.423 [2024-07-24 20:52:33.861497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.423 [2024-07-24 20:52:33.861525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.423 qpair failed and we were unable to recover it.
00:24:38.423 [2024-07-24 20:52:33.861640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.423 [2024-07-24 20:52:33.861666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.423 qpair failed and we were unable to recover it.
00:24:38.423 [2024-07-24 20:52:33.861767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.423 [2024-07-24 20:52:33.861792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.423 qpair failed and we were unable to recover it.
00:24:38.423 [2024-07-24 20:52:33.861899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.861924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.862584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.862879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.862906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.863303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.863893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.863918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.864028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.864175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.864319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.864486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.864668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.864804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.864829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.864983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.865115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.865252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.865389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.865571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.865740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.865898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.865923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 
00:24:38.423 [2024-07-24 20:52:33.866335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.423 [2024-07-24 20:52:33.866799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.423 qpair failed and we were unable to recover it. 00:24:38.423 [2024-07-24 20:52:33.866909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.866934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.867074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.867206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.867358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.867488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.867632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.867768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.867913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.867952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.868068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.868235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.868411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.868560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.868718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.868898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.868926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.869312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.869877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.869902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.870007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.870142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.870294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.870443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.870600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.870738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.870912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.870938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.871518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.871873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.871983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.872022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.872143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.872169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 
00:24:38.424 [2024-07-24 20:52:33.872310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.872336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.424 qpair failed and we were unable to recover it. 00:24:38.424 [2024-07-24 20:52:33.872441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.424 [2024-07-24 20:52:33.872467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.872593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.872619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.872768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.872794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.872971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.872996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 
00:24:38.425 [2024-07-24 20:52:33.873109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.873262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.873438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.873598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.873719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 
00:24:38.425 [2024-07-24 20:52:33.873857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.873882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.873990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.874126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.874294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.874430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 
00:24:38.425 [2024-07-24 20:52:33.874609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.874764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.874905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.874930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 
00:24:38.425 [2024-07-24 20:52:33.875355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 00:24:38.425 [2024-07-24 20:52:33.875914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.425 [2024-07-24 20:52:33.875940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.425 qpair failed and we were unable to recover it. 
00:24:38.425 [2024-07-24 20:52:33.876046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.876872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.876899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.877927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.877953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.425 [2024-07-24 20:52:33.878062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.425 [2024-07-24 20:52:33.878089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.425 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.878211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.878393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.878548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.878739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.878873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.878986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.879903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.879929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.880928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.880954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.881867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.881895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.426 [2024-07-24 20:52:33.882846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.426 qpair failed and we were unable to recover it.
00:24:38.426 [2024-07-24 20:52:33.882951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.882979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.883859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.883974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.884910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.884935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.885874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.885987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.886940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.886972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.887943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.887969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.888074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.888099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.888213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.888250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.427 qpair failed and we were unable to recover it.
00:24:38.427 [2024-07-24 20:52:33.888367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.427 [2024-07-24 20:52:33.888393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.888550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.888600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.888730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.888763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.888874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.888908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.889858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.889993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.428 [2024-07-24 20:52:33.890034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.428 qpair failed and we were unable to recover it.
00:24:38.428 [2024-07-24 20:52:33.890167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.890330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.890462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.890596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.890719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 
00:24:38.428 [2024-07-24 20:52:33.890891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.890917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 
00:24:38.428 [2024-07-24 20:52:33.891596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.891888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.891914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 
00:24:38.428 [2024-07-24 20:52:33.892348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.892831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.892974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 
00:24:38.428 [2024-07-24 20:52:33.893116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.893255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.893422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.893593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.893758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 
00:24:38.428 [2024-07-24 20:52:33.893903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.428 [2024-07-24 20:52:33.893938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.428 qpair failed and we were unable to recover it. 00:24:38.428 [2024-07-24 20:52:33.894049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.894202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.894385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.894533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.894735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.894900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.894925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.895458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.895896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.895921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.896030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.896190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.896413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.896570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.896734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.896886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.896911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.897019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.897164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.897323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.897451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.897583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.897750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.897888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.897913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.898434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.898905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.898930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.899030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.899057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 
00:24:38.429 [2024-07-24 20:52:33.899187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.899227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.899371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.899399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.899508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.899546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.899698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.429 [2024-07-24 20:52:33.899725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.429 qpair failed and we were unable to recover it. 00:24:38.429 [2024-07-24 20:52:33.899866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.899891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 
00:24:38.430 [2024-07-24 20:52:33.900005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.900140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.900292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.900443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.900586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 
00:24:38.430 [2024-07-24 20:52:33.900754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.900889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.900915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 
00:24:38.430 [2024-07-24 20:52:33.901527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.901856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.901991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.902139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 
00:24:38.430 [2024-07-24 20:52:33.902291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.902422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.902556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.902706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 00:24:38.430 [2024-07-24 20:52:33.902856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.430 [2024-07-24 20:52:33.902894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.430 qpair failed and we were unable to recover it. 
00:24:38.430 [2024-07-24 20:52:33.903018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.903927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.903953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.904914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.904940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.430 qpair failed and we were unable to recover it.
00:24:38.430 [2024-07-24 20:52:33.905082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.430 [2024-07-24 20:52:33.905108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.905924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.905960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.906882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.906995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.907956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.907981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.908919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.908947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.909872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.909982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.910010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.910126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.910153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.910263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.910291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.431 qpair failed and we were unable to recover it.
00:24:38.431 [2024-07-24 20:52:33.910395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-24 20:52:33.910421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.910524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.910551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.910715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.910741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.910871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.910897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.911864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.911890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.912890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.912915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.913892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.913919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.914880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.914905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.432 [2024-07-24 20:52:33.915797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.432 qpair failed and we were unable to recover it.
00:24:38.432 [2024-07-24 20:52:33.915915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.915942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.433 [2024-07-24 20:52:33.916823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.433 [2024-07-24 20:52:33.916848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.433 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.916955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.916980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.917869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.917894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.918882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.918917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.919054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.919079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.705 [2024-07-24 20:52:33.919187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.705 [2024-07-24 20:52:33.919213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.705 qpair failed and we were unable to recover it.
00:24:38.706 [2024-07-24 20:52:33.919327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.706 [2024-07-24 20:52:33.919352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.706 qpair failed and we were unable to recover it.
00:24:38.706 [2024-07-24 20:52:33.919451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.706 [2024-07-24 20:52:33.919476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.706 qpair failed and we were unable to recover it.
00:24:38.706 [2024-07-24 20:52:33.919587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.706 [2024-07-24 20:52:33.919612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.706 qpair failed and we were unable to recover it.
00:24:38.706 [2024-07-24 20:52:33.919714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.706 [2024-07-24 20:52:33.919739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.706 qpair failed and we were unable to recover it.
00:24:38.706 [2024-07-24 20:52:33.919846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.919881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.919998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.920142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.920280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.920440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.920597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.920766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.920896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.920921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.921327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.921943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.921969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.922074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.922220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.922380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.922517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.922687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.922811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.922938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.922964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.923522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.923928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.923958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.924071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.924096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 
00:24:38.706 [2024-07-24 20:52:33.924220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.924252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.924365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.924390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.924510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.706 [2024-07-24 20:52:33.924547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.706 qpair failed and we were unable to recover it. 00:24:38.706 [2024-07-24 20:52:33.924648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.924673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.924817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.924842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.924944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.924970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.925647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.925935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.925961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.926370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.926948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.926973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.927094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.927263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.927396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.927519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.927651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.927770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.927908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.927934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.928484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.928914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.928939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.929041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.929174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.929324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.929449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.929582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.929717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 
00:24:38.707 [2024-07-24 20:52:33.929896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.707 [2024-07-24 20:52:33.929921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.707 qpair failed and we were unable to recover it. 00:24:38.707 [2024-07-24 20:52:33.930030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.930168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.930340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.930490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 
00:24:38.708 [2024-07-24 20:52:33.930637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.930758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.930922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.930950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 
00:24:38.708 [2024-07-24 20:52:33.931363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.931874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 00:24:38.708 [2024-07-24 20:52:33.931985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.708 [2024-07-24 20:52:33.932012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.708 qpair failed and we were unable to recover it. 
00:24:38.711 [2024-07-24 20:52:33.947803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.947829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.947927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.947951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.948065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.948091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.948224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.948266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.948377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.948402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 
00:24:38.711 [2024-07-24 20:52:33.949590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.949642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.949818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.949846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.949972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.949998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.950131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.950156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.950329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.950355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 
00:24:38.711 [2024-07-24 20:52:33.950494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.950520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.950624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.950649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.950816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.950842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.950979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.951135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 
00:24:38.711 [2024-07-24 20:52:33.951269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.951405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.951575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.951709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.951840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.951866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 
00:24:38.711 [2024-07-24 20:52:33.951995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.952021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.952135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.952161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.952281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.952307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.952409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.711 [2024-07-24 20:52:33.952434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.711 qpair failed and we were unable to recover it. 00:24:38.711 [2024-07-24 20:52:33.952542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.952574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.952698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.952724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.952828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.952854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.952958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.952984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.953400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.953962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.953987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.954100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.954256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.954391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.954520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.954696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.954828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.954853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.954989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.955150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.955313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.955503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.955627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.955765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.955895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.955921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.956290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.956798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.956921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.956946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 
00:24:38.712 [2024-07-24 20:52:33.957635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.712 [2024-07-24 20:52:33.957798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.712 qpair failed and we were unable to recover it. 00:24:38.712 [2024-07-24 20:52:33.957907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.957933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 
00:24:38.713 [2024-07-24 20:52:33.958372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.958922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.958947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 
00:24:38.713 [2024-07-24 20:52:33.959081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.959111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.959283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.959309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.959419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.959443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.959543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.959569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 00:24:38.713 [2024-07-24 20:52:33.959694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.713 [2024-07-24 20:52:33.959719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.713 qpair failed and we were unable to recover it. 
00:24:38.713 [2024-07-24 20:52:33.959817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.959842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.959948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.959972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.960867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.960993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.961882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.961907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.962858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.962986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.963015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.963154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.963179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.963305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.963332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.713 qpair failed and we were unable to recover it.
00:24:38.713 [2024-07-24 20:52:33.963434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.713 [2024-07-24 20:52:33.963459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.963561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.963586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.963693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.963717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.963826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.963850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.964850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.964986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.965911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.965936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.966887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.966988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.967955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.714 [2024-07-24 20:52:33.967980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.714 qpair failed and we were unable to recover it.
00:24:38.714 [2024-07-24 20:52:33.968138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.968300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.968463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.968600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.968730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.968900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.968925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.969879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.969905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.970877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.970902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.971868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.971975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.972847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.972988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.973013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.973122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.973147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.973255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.715 [2024-07-24 20:52:33.973281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.715 qpair failed and we were unable to recover it.
00:24:38.715 [2024-07-24 20:52:33.973379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.973405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.973498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.973524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.973656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.973681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.973793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.973818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.973948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.973974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.974927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.974953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.975918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.975943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.976037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.716 [2024-07-24 20:52:33.976062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.716 qpair failed and we were unable to recover it.
00:24:38.716 [2024-07-24 20:52:33.976194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.976328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.976472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.976637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.976768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 
00:24:38.716 [2024-07-24 20:52:33.976898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.976923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 
00:24:38.716 [2024-07-24 20:52:33.977591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.977856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.977882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.978015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.978040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.978138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.978163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 
00:24:38.716 [2024-07-24 20:52:33.978270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.978295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.978411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.978435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.978578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.716 [2024-07-24 20:52:33.978602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.716 qpair failed and we were unable to recover it. 00:24:38.716 [2024-07-24 20:52:33.978704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.978728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.978834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.978859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.978962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.978987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.979659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.979931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.979956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.980379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.980899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.980924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.981037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.981730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.981884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.981983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.982395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.982900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.982925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.983022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.983193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.983355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.983520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.983646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 
00:24:38.717 [2024-07-24 20:52:33.983776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.717 qpair failed and we were unable to recover it. 00:24:38.717 [2024-07-24 20:52:33.983916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.717 [2024-07-24 20:52:33.983947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.984513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.984914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.984941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.985040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.985199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.985334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.985481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.985620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.985765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.985891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.985916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.986568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.986854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.986987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.987121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.987251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.987379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.987509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.987670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.987811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.987836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.988090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.988227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.988389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.988516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.988645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 
00:24:38.718 [2024-07-24 20:52:33.988775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.988911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.988935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 [2024-07-24 20:52:33.989038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.989063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.718 [2024-07-24 20:52:33.989159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.718 [2024-07-24 20:52:33.989184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.718 qpair failed and we were unable to recover it. 00:24:38.718 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:38.719 [2024-07-24 20:52:33.989318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.989343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.989463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.719 [2024-07-24 20:52:33.989488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.989593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.719 [2024-07-24 20:52:33.989618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 20:52:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:38.719 [2024-07-24 20:52:33.989729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.989754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.989857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.989882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.989987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.990673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.990936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.990961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.991398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.991964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.991990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.992098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.992229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.992373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.992510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.992639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.992773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.992931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.992956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.993062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.993087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.993226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.993263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.993363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.993389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 
00:24:38.719 [2024-07-24 20:52:33.993489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.993514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.719 [2024-07-24 20:52:33.993620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.719 [2024-07-24 20:52:33.993645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.719 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.993787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.993813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.993921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.993946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.994044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.994187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.994317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.994482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.994621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.994758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.994895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.994920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.995584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.995861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.995901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.996308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.996894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.996919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.997020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.997148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.997290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.997423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.997600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.997729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.997851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.997876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.998448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 00:24:38.720 [2024-07-24 20:52:33.998974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.720 [2024-07-24 20:52:33.998999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.720 qpair failed and we were unable to recover it. 
00:24:38.720 [2024-07-24 20:52:33.999100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:33.999235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:33.999370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:33.999501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:33.999633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 
00:24:38.721 [2024-07-24 20:52:33.999786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:33.999915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:33.999940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.000082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.000238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.000372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 
00:24:38.721 [2024-07-24 20:52:34.000504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.000664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.000835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.000886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.001046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.001072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 00:24:38.721 [2024-07-24 20:52:34.001204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.721 [2024-07-24 20:52:34.001230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.721 qpair failed and we were unable to recover it. 
00:24:38.721 [2024-07-24 20:52:34.001351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.001377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.001483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.001509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.001657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.001706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.001893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.001920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.002946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.002975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.003873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.003898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.721 [2024-07-24 20:52:34.004839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.721 qpair failed and we were unable to recover it.
00:24:38.721 [2024-07-24 20:52:34.004944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.004974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.005933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.005960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.006097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.006121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.006216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.006253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.006369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.006394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.006505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.006530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:38.722 [2024-07-24 20:52:34.006682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.006708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:38.722 [2024-07-24 20:52:34.006841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.722 [2024-07-24 20:52:34.006883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.722 [2024-07-24 20:52:34.007032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.007933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.007958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.008959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.008985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.009899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.009925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.722 [2024-07-24 20:52:34.010032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.722 [2024-07-24 20:52:34.010056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.722 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.010934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.010960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.011897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.011923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.012840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.012986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.013967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.013993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.014882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.014992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.723 [2024-07-24 20:52:34.015017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.723 qpair failed and we were unable to recover it.
00:24:38.723 [2024-07-24 20:52:34.015130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.723 [2024-07-24 20:52:34.015156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.723 qpair failed and we were unable to recover it. 00:24:38.723 [2024-07-24 20:52:34.015280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.723 [2024-07-24 20:52:34.015307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.723 qpair failed and we were unable to recover it. 00:24:38.723 [2024-07-24 20:52:34.015416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.723 [2024-07-24 20:52:34.015443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.723 qpair failed and we were unable to recover it. 00:24:38.723 [2024-07-24 20:52:34.015585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.723 [2024-07-24 20:52:34.015610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.723 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.015752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.015777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.015893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.015919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.016067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.016259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.016421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.016586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.016743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.016895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.016921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.017495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.017910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.017935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.018048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.018210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.018350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.018484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.018631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.018767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.018928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.018955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.019617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.019890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.019915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.020030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.020056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.020173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.020211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 
00:24:38.724 [2024-07-24 20:52:34.020346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.020374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.724 [2024-07-24 20:52:34.020494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.724 [2024-07-24 20:52:34.020521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.724 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.020662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.020688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.020826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.020854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.020989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.021156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.021306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.021432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.021571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.021699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.021829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.021965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.021991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.022542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.022964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.022990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.023094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.023222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.023398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.023542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.023671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.023832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.023965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.023990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.024656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.024923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.024950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.025065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.025194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 
00:24:38.725 [2024-07-24 20:52:34.025331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.025494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.025654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.725 qpair failed and we were unable to recover it. 00:24:38.725 [2024-07-24 20:52:34.025775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.725 [2024-07-24 20:52:34.025800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.025900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.025925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.026050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.026249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.026398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.026531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.026675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.026806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.026946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.026972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.027482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.027938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.027962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.028081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.028253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.028388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.028516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.028681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.028818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.028954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.028980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.029727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.029874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.029985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.030151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.030301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 [2024-07-24 20:52:34.030462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 Malloc0 00:24:38.726 [2024-07-24 20:52:34.030599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.030732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 [2024-07-24 20:52:34.030870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.030895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 00:24:38.726 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.726 [2024-07-24 20:52:34.030998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.726 [2024-07-24 20:52:34.031024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.726 qpair failed and we were unable to recover it. 
00:24:38.726 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:38.726 [2024-07-24 20:52:34.031125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.727 [2024-07-24 20:52:34.031279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:38.727 [2024-07-24 20:52:34.031412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.031565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.031691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.031819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.031942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.031967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.032362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.032956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.032984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.033093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.033226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.033362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.033494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.033642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.033799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.033932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.033958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034296] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.727 [2024-07-24 20:52:34.034377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.034506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.034909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.034935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.035051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.035197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.035350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.035493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.035635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.035790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 
00:24:38.727 [2024-07-24 20:52:34.035923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.035948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.036051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.036076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.036182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.727 [2024-07-24 20:52:34.036209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.727 qpair failed and we were unable to recover it. 00:24:38.727 [2024-07-24 20:52:34.036327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.036353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.036459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.036484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.036600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.036625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.036725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.036750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.036857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.036883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.036984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.037113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.037235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.037409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.037575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.037732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.037867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.037895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.038004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.038183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.038357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.038531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.038669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.038806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.038833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.038994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.039128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.039278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.039428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.039618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.039746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.039906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.039932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.040375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.040874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.728 [2024-07-24 20:52:34.040983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.041007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 
00:24:38.728 [2024-07-24 20:52:34.041115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.728 [2024-07-24 20:52:34.041140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.728 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.041245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.041378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.041511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.041683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 
00:24:38.729 [2024-07-24 20:52:34.041809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.041938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.041962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.042078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.042117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.042227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.042260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 00:24:38.729 [2024-07-24 20:52:34.042363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.729 [2024-07-24 20:52:34.042388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420 00:24:38.729 qpair failed and we were unable to recover it. 
00:24:38.729 [2024-07-24 20:52:34.042500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.042525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.729 [2024-07-24 20:52:34.042635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.042660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:38.729 [2024-07-24 20:52:34.042774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.042799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.729 [2024-07-24 20:52:34.042909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.042936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.729 [2024-07-24 20:52:34.043052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.043895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.043920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.044908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.044939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.045936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.045961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.046087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.046125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.046256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.046295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.046418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.046445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.046548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.729 [2024-07-24 20:52:34.046574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.729 qpair failed and we were unable to recover it.
00:24:38.729 [2024-07-24 20:52:34.046675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.046700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.046807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.046832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.046941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.046968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.047917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.047943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.048880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.048906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.049876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.049913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.050021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.050180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.050353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.050492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.730 [2024-07-24 20:52:34.050650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:38.730 [2024-07-24 20:52:34.050792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.730 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.730 [2024-07-24 20:52:34.050960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.050985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.051876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.051993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.052018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.052118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.730 [2024-07-24 20:52:34.052142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.730 qpair failed and we were unable to recover it.
00:24:38.730 [2024-07-24 20:52:34.052254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.052411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.052542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.052678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.052810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.052973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.052999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.053843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.053869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.054953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.054979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.055941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.055980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.056941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.056967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.057909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.057935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.731 [2024-07-24 20:52:34.058051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.731 [2024-07-24 20:52:34.058080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.731 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.058192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.058397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.058538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.732 [2024-07-24 20:52:34.058678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:38.732 [2024-07-24 20:52:34.058797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.732 [2024-07-24 20:52:34.058935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.058960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.732 [2024-07-24 20:52:34.059095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.059930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.059955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.060923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.060949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fc0000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.061959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.061984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x672250 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.062130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.062170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.062282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.062317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.062424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.732 [2024-07-24 20:52:34.062450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4fb8000b90 with addr=10.0.0.2, port=4420
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 [2024-07-24 20:52:34.062595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:38.732 [2024-07-24 20:52:34.065080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.732 [2024-07-24 20:52:34.065259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.732 [2024-07-24 20:52:34.065286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.732 [2024-07-24 20:52:34.065308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.732 [2024-07-24 20:52:34.065321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.732 [2024-07-24 20:52:34.065357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.732 qpair failed and we were unable to recover it.
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:38.732 20:52:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1695481
00:24:38.732 [2024-07-24 20:52:34.074930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.732 [2024-07-24 20:52:34.075065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.732 [2024-07-24 20:52:34.075092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.732 [2024-07-24 20:52:34.075107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.732 [2024-07-24 20:52:34.075120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.732 [2024-07-24 20:52:34.075155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.084915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.085025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.085052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.085066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.085079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.085108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.094981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.095125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.095151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.095166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.095179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.095208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.104886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.104993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.105020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.105034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.105047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.105075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.114889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.115009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.115036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.115050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.115063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.115093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.124902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.125008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.125039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.125054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.125066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.125096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.134922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.135033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.135059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.135073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.135086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.135116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.145019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.145165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.145190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.145205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.145217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.145253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.155063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.155171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.155197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.155211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.155224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.155263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.165067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.165165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.165192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.165206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.165219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.165260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.175034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.175140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.175166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.175180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.175193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.175221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.185111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.185214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.185240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.185263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.185275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.185305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.195131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.195250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.195277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.195291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.733 [2024-07-24 20:52:34.195303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.733 [2024-07-24 20:52:34.195332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.733 qpair failed and we were unable to recover it.
00:24:38.733 [2024-07-24 20:52:34.205146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.733 [2024-07-24 20:52:34.205256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.733 [2024-07-24 20:52:34.205282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.733 [2024-07-24 20:52:34.205297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.734 [2024-07-24 20:52:34.205309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.734 [2024-07-24 20:52:34.205338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.734 qpair failed and we were unable to recover it.
00:24:38.734 [2024-07-24 20:52:34.215300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.734 [2024-07-24 20:52:34.215421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.734 [2024-07-24 20:52:34.215447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.734 [2024-07-24 20:52:34.215461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.734 [2024-07-24 20:52:34.215474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.734 [2024-07-24 20:52:34.215503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.734 qpair failed and we were unable to recover it. 
00:24:38.734 [2024-07-24 20:52:34.225269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.734 [2024-07-24 20:52:34.225375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.734 [2024-07-24 20:52:34.225401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.734 [2024-07-24 20:52:34.225415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.734 [2024-07-24 20:52:34.225428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.734 [2024-07-24 20:52:34.225457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.734 qpair failed and we were unable to recover it. 
00:24:38.734 [2024-07-24 20:52:34.235248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.734 [2024-07-24 20:52:34.235368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.734 [2024-07-24 20:52:34.235393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.734 [2024-07-24 20:52:34.235407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.734 [2024-07-24 20:52:34.235420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.734 [2024-07-24 20:52:34.235449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.734 qpair failed and we were unable to recover it. 
00:24:38.734 [2024-07-24 20:52:34.245281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.734 [2024-07-24 20:52:34.245390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.734 [2024-07-24 20:52:34.245416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.734 [2024-07-24 20:52:34.245431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.734 [2024-07-24 20:52:34.245446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.734 [2024-07-24 20:52:34.245476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.734 qpair failed and we were unable to recover it. 
00:24:38.734 [2024-07-24 20:52:34.255299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.734 [2024-07-24 20:52:34.255432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.734 [2024-07-24 20:52:34.255459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.734 [2024-07-24 20:52:34.255477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.734 [2024-07-24 20:52:34.255495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.734 [2024-07-24 20:52:34.255549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.734 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.265365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.265487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.265515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.265530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.265543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.265574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.275384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.275486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.275512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.275526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.275539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.275568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.285389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.285493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.285519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.285534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.285546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.285575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.295397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.295509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.295534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.295548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.295561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.295590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.305440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.305546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.305573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.305587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.305600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.305630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.315459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.315565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.315591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.315605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.315618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.315646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.325470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.325573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.325599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.325613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.325625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.325655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.335599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.335709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.335736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.335750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.335763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.335792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.345563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.345675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.345700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.345721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.345734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.345763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.355584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.355691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.355717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.355731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.355744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.355774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.993 [2024-07-24 20:52:34.365633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.993 [2024-07-24 20:52:34.365750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.993 [2024-07-24 20:52:34.365776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.993 [2024-07-24 20:52:34.365790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.993 [2024-07-24 20:52:34.365803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.993 [2024-07-24 20:52:34.365833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.993 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.375631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.375760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.375785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.375799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.375811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.375841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.385745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.385851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.385877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.385891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.385904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.385933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.395683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.395828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.395854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.395868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.395881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.395910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.405743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.405871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.405897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.405911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.405923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.405953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.415760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.415879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.415904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.415918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.415931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.415960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.425778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.425882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.425907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.425921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.425934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.425963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.435812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.435948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.435974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.435996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.436010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.436040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.445859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.445961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.445987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.446001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.446014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.446043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.455911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.456023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.456049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.456063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.456075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.456104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.465895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.466004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.466030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.466044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.466056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.466087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.476006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.476109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.476134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.476149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.476161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.476190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.486013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.486120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.486145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.486159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.486172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.486201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.496037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.496157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.994 [2024-07-24 20:52:34.496182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.994 [2024-07-24 20:52:34.496196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.994 [2024-07-24 20:52:34.496209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.994 [2024-07-24 20:52:34.496238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.994 qpair failed and we were unable to recover it. 
00:24:38.994 [2024-07-24 20:52:34.506110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:38.994 [2024-07-24 20:52:34.506220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:38.995 [2024-07-24 20:52:34.506253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:38.995 [2024-07-24 20:52:34.506270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:38.995 [2024-07-24 20:52:34.506283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:38.995 [2024-07-24 20:52:34.506312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.995 qpair failed and we were unable to recover it. 
00:24:38.995 [2024-07-24 20:52:34.516039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.995 [2024-07-24 20:52:34.516141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.995 [2024-07-24 20:52:34.516167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.995 [2024-07-24 20:52:34.516181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.995 [2024-07-24 20:52:34.516194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.995 [2024-07-24 20:52:34.516222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.995 qpair failed and we were unable to recover it.
00:24:38.995 [2024-07-24 20:52:34.526141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.995 [2024-07-24 20:52:34.526302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.995 [2024-07-24 20:52:34.526333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.995 [2024-07-24 20:52:34.526348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.995 [2024-07-24 20:52:34.526361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.995 [2024-07-24 20:52:34.526390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.995 qpair failed and we were unable to recover it.
00:24:38.995 [2024-07-24 20:52:34.536115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.995 [2024-07-24 20:52:34.536255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.995 [2024-07-24 20:52:34.536282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.995 [2024-07-24 20:52:34.536298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.995 [2024-07-24 20:52:34.536314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.995 [2024-07-24 20:52:34.536344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.995 qpair failed and we were unable to recover it.
00:24:38.995 [2024-07-24 20:52:34.546275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.995 [2024-07-24 20:52:34.546388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.995 [2024-07-24 20:52:34.546414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.995 [2024-07-24 20:52:34.546428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.995 [2024-07-24 20:52:34.546440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.995 [2024-07-24 20:52:34.546469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.995 qpair failed and we were unable to recover it.
00:24:38.995 [2024-07-24 20:52:34.556174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:38.995 [2024-07-24 20:52:34.556303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:38.995 [2024-07-24 20:52:34.556351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:38.995 [2024-07-24 20:52:34.556374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:38.995 [2024-07-24 20:52:34.556387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:38.995 [2024-07-24 20:52:34.556418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:38.995 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.566185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.566309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.566336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.566350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.566363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.566399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.576225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.576354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.576380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.576394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.576407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.576437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.586240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.586348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.586374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.586389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.586403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.586433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.596281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.596390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.596416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.596431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.596443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.596473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.606319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.606433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.606460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.606475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.606490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.606520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.616374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.616486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.616517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.616532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.616545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.616574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.626371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.626481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.626507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.626522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.626533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.626562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.254 qpair failed and we were unable to recover it.
00:24:39.254 [2024-07-24 20:52:34.636412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.254 [2024-07-24 20:52:34.636521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.254 [2024-07-24 20:52:34.636547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.254 [2024-07-24 20:52:34.636561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.254 [2024-07-24 20:52:34.636574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.254 [2024-07-24 20:52:34.636604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.646425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.646526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.646552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.646566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.646579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.646608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.656467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.656592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.656618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.656633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.656651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.656682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.666512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.666628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.666652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.666666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.666677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.666706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.676563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.676685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.676710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.676725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.676738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.676766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.686559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.686658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.686683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.686697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.686710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.686738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.696562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.696676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.696701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.696715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.696728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.696758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.706572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.706690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.706716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.706730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.706742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.706772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.716615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.716724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.716749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.716763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.716775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.716805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.726701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.726802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.726828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.726842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.726855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.726884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.736729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.736849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.736874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.736889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.736901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.736932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.746730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.746840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.746866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.746887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.746902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.746931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.756724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.756826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.756852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.756866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.756878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.756908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.766786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.255 [2024-07-24 20:52:34.766898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.255 [2024-07-24 20:52:34.766924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.255 [2024-07-24 20:52:34.766938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.255 [2024-07-24 20:52:34.766950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.255 [2024-07-24 20:52:34.766979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.255 qpair failed and we were unable to recover it.
00:24:39.255 [2024-07-24 20:52:34.776905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.256 [2024-07-24 20:52:34.777033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.256 [2024-07-24 20:52:34.777059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.256 [2024-07-24 20:52:34.777073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.256 [2024-07-24 20:52:34.777086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.256 [2024-07-24 20:52:34.777115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.256 qpair failed and we were unable to recover it.
00:24:39.256 [2024-07-24 20:52:34.786844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.256 [2024-07-24 20:52:34.786974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.256 [2024-07-24 20:52:34.786999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.256 [2024-07-24 20:52:34.787014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.256 [2024-07-24 20:52:34.787026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.256 [2024-07-24 20:52:34.787054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.256 qpair failed and we were unable to recover it.
00:24:39.256 [2024-07-24 20:52:34.796850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.256 [2024-07-24 20:52:34.796957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.256 [2024-07-24 20:52:34.796982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.256 [2024-07-24 20:52:34.796996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.256 [2024-07-24 20:52:34.797009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.256 [2024-07-24 20:52:34.797037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.256 qpair failed and we were unable to recover it.
00:24:39.256 [2024-07-24 20:52:34.806877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.256 [2024-07-24 20:52:34.806981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.256 [2024-07-24 20:52:34.807007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.256 [2024-07-24 20:52:34.807021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.256 [2024-07-24 20:52:34.807033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.256 [2024-07-24 20:52:34.807062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.256 qpair failed and we were unable to recover it.
00:24:39.256 [2024-07-24 20:52:34.816921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.256 [2024-07-24 20:52:34.817044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.256 [2024-07-24 20:52:34.817070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.256 [2024-07-24 20:52:34.817085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.256 [2024-07-24 20:52:34.817097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.256 [2024-07-24 20:52:34.817127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.256 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.826938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.515 [2024-07-24 20:52:34.827046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.515 [2024-07-24 20:52:34.827073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.515 [2024-07-24 20:52:34.827087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.515 [2024-07-24 20:52:34.827100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.515 [2024-07-24 20:52:34.827129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.515 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.836978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.515 [2024-07-24 20:52:34.837087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.515 [2024-07-24 20:52:34.837113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.515 [2024-07-24 20:52:34.837134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.515 [2024-07-24 20:52:34.837147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.515 [2024-07-24 20:52:34.837176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.515 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.846981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.515 [2024-07-24 20:52:34.847085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.515 [2024-07-24 20:52:34.847110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.515 [2024-07-24 20:52:34.847125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.515 [2024-07-24 20:52:34.847137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.515 [2024-07-24 20:52:34.847167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.515 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.857105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.515 [2024-07-24 20:52:34.857215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.515 [2024-07-24 20:52:34.857240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.515 [2024-07-24 20:52:34.857264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.515 [2024-07-24 20:52:34.857277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.515 [2024-07-24 20:52:34.857307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.515 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.867209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:39.515 [2024-07-24 20:52:34.867342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:39.515 [2024-07-24 20:52:34.867368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:39.515 [2024-07-24 20:52:34.867382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:39.515 [2024-07-24 20:52:34.867395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:39.515 [2024-07-24 20:52:34.867425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:39.515 qpair failed and we were unable to recover it.
00:24:39.515 [2024-07-24 20:52:34.877115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.515 [2024-07-24 20:52:34.877249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.515 [2024-07-24 20:52:34.877275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.877289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.877302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.877331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.887130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.887229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.887262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.887277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.887290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.887319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.897167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.897282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.897308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.897322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.897335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.897365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.907163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.907295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.907321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.907335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.907347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.907376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.917171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.917282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.917307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.917322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.917334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.917363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.927212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.927334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.927364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.927379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.927392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.927421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.937252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.937362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.937388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.937402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.937415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.937445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.947278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.947400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.947427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.947442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.947458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.947490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.957274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.957381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.957407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.957422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.957435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.957464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.967322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.967425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.967451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.967466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.967480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.967515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.977352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.977495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.977522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.977537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.977549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.977579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.987361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.987467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.987492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.987507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.987520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.987548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:34.997401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:34.997505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:34.997531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:34.997545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:34.997558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.516 [2024-07-24 20:52:34.997587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.516 qpair failed and we were unable to recover it. 
00:24:39.516 [2024-07-24 20:52:35.007448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.516 [2024-07-24 20:52:35.007570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.516 [2024-07-24 20:52:35.007597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.516 [2024-07-24 20:52:35.007611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.516 [2024-07-24 20:52:35.007624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.007653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.017475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.017601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.017632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.017648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.017662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.017692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.027483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.027589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.027615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.027629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.027642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.027670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.037569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.037687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.037713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.037727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.037740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.037769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.047640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.047757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.047784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.047799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.047815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.047845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.057634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.057777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.057804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.057818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.057837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.057868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.067590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.067725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.067752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.067767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.067780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.067810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.517 [2024-07-24 20:52:35.077626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.517 [2024-07-24 20:52:35.077741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.517 [2024-07-24 20:52:35.077768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.517 [2024-07-24 20:52:35.077783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.517 [2024-07-24 20:52:35.077797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.517 [2024-07-24 20:52:35.077827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.517 qpair failed and we were unable to recover it. 
00:24:39.776 [2024-07-24 20:52:35.087694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.776 [2024-07-24 20:52:35.087805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.776 [2024-07-24 20:52:35.087831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.776 [2024-07-24 20:52:35.087846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.776 [2024-07-24 20:52:35.087858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.776 [2024-07-24 20:52:35.087889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.776 qpair failed and we were unable to recover it. 
00:24:39.776 [2024-07-24 20:52:35.097697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.097810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.097837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.097851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.097864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.097893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.107725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.107836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.107862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.107876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.107889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.107918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.117770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.117879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.117904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.117918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.117931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.117961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.127797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.127897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.127923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.127937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.127950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.127979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.137851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.137968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.137993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.138007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.138020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.138050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.147872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.148033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.148059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.148073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.148091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.148123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.157866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.157997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.158022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.158036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.158049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.158078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.167913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.168022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.168048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.168062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.168075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.168105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.178022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.178135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.178160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.178175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.178188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.178217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.188039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.188149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.188175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.188189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.188202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.188231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.198003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.198107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.198132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.198147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.198159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.198188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.207987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.208091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.208117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.208132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.208144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.208173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.218059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.218184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.777 [2024-07-24 20:52:35.218210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.777 [2024-07-24 20:52:35.218224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.777 [2024-07-24 20:52:35.218236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.777 [2024-07-24 20:52:35.218275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.777 qpair failed and we were unable to recover it. 
00:24:39.777 [2024-07-24 20:52:35.228100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.777 [2024-07-24 20:52:35.228207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.228236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.228264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.228279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.228309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.238088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.238195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.238221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.238249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.238266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.238297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.248111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.248236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.248270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.248285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.248297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.248326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.258173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.258295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.258323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.258338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.258351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.258382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.268177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.268289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.268316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.268330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.268342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.268373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.278236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.278352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.278377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.278392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.278404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.278433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.288273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.288378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.288404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.288418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.288430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.288459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.298278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.298388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.298413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.298428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.298440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.298469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.308316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.308441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.308466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.308480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.308493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.308523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.318355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.318460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.318485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.318499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.318512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.318541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.328374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.328503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.328534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.328553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.328567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.328597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:39.778 [2024-07-24 20:52:35.338426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:39.778 [2024-07-24 20:52:35.338585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:39.778 [2024-07-24 20:52:35.338611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:39.778 [2024-07-24 20:52:35.338625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:39.778 [2024-07-24 20:52:35.338638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:39.778 [2024-07-24 20:52:35.338669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.778 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.348485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.348614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.348640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.348654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.348667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:40.037 [2024-07-24 20:52:35.348696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.358497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.358617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.358643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.358657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.358670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:40.037 [2024-07-24 20:52:35.358699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.368568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.368697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.368728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.368744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.368757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.368794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.378624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.378779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.378807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.378822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.378835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.378864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.388563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.388677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.388703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.388718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.388731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.388760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.398592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.398711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.398738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.398753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.398766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.398795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.408614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.408727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.408754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.408768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.408781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.408810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.418690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.418804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.418835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.418851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.418864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.418894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.428656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.428762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.428789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.428803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.428816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.428845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.438693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.438823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.438849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.438864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.438877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.438906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.448750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.448863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.037 [2024-07-24 20:52:35.448890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.037 [2024-07-24 20:52:35.448904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.037 [2024-07-24 20:52:35.448917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.037 [2024-07-24 20:52:35.448946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.037 qpair failed and we were unable to recover it. 
00:24:40.037 [2024-07-24 20:52:35.458776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.037 [2024-07-24 20:52:35.458894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.038 [2024-07-24 20:52:35.458921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.038 [2024-07-24 20:52:35.458935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.038 [2024-07-24 20:52:35.458953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.038 [2024-07-24 20:52:35.458985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.038 qpair failed and we were unable to recover it. 
00:24:40.038 [2024-07-24 20:52:35.468818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.038 [2024-07-24 20:52:35.468928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.038 [2024-07-24 20:52:35.468953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.038 [2024-07-24 20:52:35.468967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.038 [2024-07-24 20:52:35.468980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.038 [2024-07-24 20:52:35.469008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.038 qpair failed and we were unable to recover it. 
00:24:40.038 [2024-07-24 20:52:35.478831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.038 [2024-07-24 20:52:35.478933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.038 [2024-07-24 20:52:35.478961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.038 [2024-07-24 20:52:35.478975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.038 [2024-07-24 20:52:35.478988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.038 [2024-07-24 20:52:35.479030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.038 qpair failed and we were unable to recover it. 
00:24:40.038 [... the same six-entry connect-failure sequence (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7f4fc8000b90 -> CQ transport error -6 on qpair id 1) repeats 35 more times at ~10 ms intervals, from 2024-07-24 20:52:35.488808 through 20:52:35.829990; each attempt ends "qpair failed and we were unable to recover it." ...]
00:24:40.299 [2024-07-24 20:52:35.839851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.299 [2024-07-24 20:52:35.839968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.299 [2024-07-24 20:52:35.839994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.299 [2024-07-24 20:52:35.840008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.299 [2024-07-24 20:52:35.840020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.299 [2024-07-24 20:52:35.840049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.299 qpair failed and we were unable to recover it. 
00:24:40.299 [2024-07-24 20:52:35.849813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.299 [2024-07-24 20:52:35.849918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.299 [2024-07-24 20:52:35.849944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.299 [2024-07-24 20:52:35.849957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.299 [2024-07-24 20:52:35.849969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.299 [2024-07-24 20:52:35.849999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.299 qpair failed and we were unable to recover it. 
00:24:40.299 [2024-07-24 20:52:35.859897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.299 [2024-07-24 20:52:35.860008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.299 [2024-07-24 20:52:35.860035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.299 [2024-07-24 20:52:35.860050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.299 [2024-07-24 20:52:35.860063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.299 [2024-07-24 20:52:35.860100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.299 qpair failed and we were unable to recover it. 
00:24:40.558 [2024-07-24 20:52:35.869886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.558 [2024-07-24 20:52:35.869997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.558 [2024-07-24 20:52:35.870025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.558 [2024-07-24 20:52:35.870040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.558 [2024-07-24 20:52:35.870053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.558 [2024-07-24 20:52:35.870082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.558 qpair failed and we were unable to recover it. 
00:24:40.558 [2024-07-24 20:52:35.879953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.558 [2024-07-24 20:52:35.880059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.558 [2024-07-24 20:52:35.880087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.558 [2024-07-24 20:52:35.880101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.558 [2024-07-24 20:52:35.880114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.880144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.889946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.890048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.890074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.890088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.890101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.890129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.899980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.900092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.900117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.900131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.900144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.900174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.910004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.910127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.910153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.910168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.910180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.910209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.920035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.920150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.920177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.920191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.920204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.920233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.930051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.930150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.930175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.930189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.930202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.930231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.940208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.940350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.940375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.940389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.940402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.940432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.950126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.950258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.950285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.950299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.950321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.950351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.960131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.960235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.960269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.960284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.960296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.960326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.970176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.970291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.970317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.970331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.970344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.970373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.980198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.980334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.980360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.980375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.980387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.980416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:35.990239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:35.990361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:35.990387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:35.990401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:35.990414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:35.990443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:36.000274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:36.000389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:36.000415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:36.000429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:36.000442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:36.000471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:36.010300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.559 [2024-07-24 20:52:36.010444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.559 [2024-07-24 20:52:36.010469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.559 [2024-07-24 20:52:36.010483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.559 [2024-07-24 20:52:36.010496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.559 [2024-07-24 20:52:36.010525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.559 qpair failed and we were unable to recover it. 
00:24:40.559 [2024-07-24 20:52:36.020332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.020455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.020480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.020494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.020507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.020536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.030350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.030505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.030531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.030545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.030557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.030586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.040381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.040491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.040517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.040538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.040552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.040582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.050502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.050608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.050633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.050647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.050661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.050689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.060473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.060585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.060611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.060624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.060637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.060680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.070487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.070622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.070647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.070661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.070675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.070705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.080528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.080653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.080679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.080693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.080706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.080735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.090534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.090643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.090670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.090684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.090697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.090726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.100606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.100720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.100746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.100761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.100774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.100802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.110572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.110701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.110726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.110740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.110752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.110782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.560 [2024-07-24 20:52:36.120591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.560 [2024-07-24 20:52:36.120718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.560 [2024-07-24 20:52:36.120745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.560 [2024-07-24 20:52:36.120760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.560 [2024-07-24 20:52:36.120773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.560 [2024-07-24 20:52:36.120806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.560 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.130646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.130764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.130793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.130814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.130828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.130872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.140695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.140804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.140831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.140845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.140858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.140890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.150687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.150795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.150822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.150836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.150848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.150877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.160778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.160902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.160930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.160945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.160958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.160989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.170770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.170884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.170912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.170928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.170941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.170970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.180880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.180996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.181023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.181037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.181050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.181079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.190839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.190956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.190983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.190997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.191010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.191040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.200906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.201013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.201039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.201054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.201067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.201096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.210871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.210974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.210999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.211013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.211026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.211055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.220910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.221022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.221054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.221069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.221082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.221111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.230936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.231045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.231070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.231084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.231097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.231126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.240961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.820 [2024-07-24 20:52:36.241106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.820 [2024-07-24 20:52:36.241132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.820 [2024-07-24 20:52:36.241146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.820 [2024-07-24 20:52:36.241159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.820 [2024-07-24 20:52:36.241187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.820 qpair failed and we were unable to recover it. 
00:24:40.820 [2024-07-24 20:52:36.251061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.251167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.251195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.251210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.251223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.251260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.261047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.261164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.261190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.261204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.261217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.261259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.271081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.271209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.271235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.271257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.271271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.271300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.281157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.281269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.281295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.281309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.281322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.281352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.291132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.291262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.291288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.291302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.291315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.291344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.301153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.301267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.301298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.301312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.301325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.301368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.311157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.311272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.311303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.311318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.311330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.311360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.321178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.321307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.321333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.321347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.321360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.321389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.331197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.331308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.331334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.331348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.331360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.331402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.341238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.341373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.341398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.341412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.341425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.341453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.351278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.351379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.351404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.351418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.351438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.351468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.361323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.361474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.361500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.361515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.821 [2024-07-24 20:52:36.361528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.821 [2024-07-24 20:52:36.361559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.821 qpair failed and we were unable to recover it. 
00:24:40.821 [2024-07-24 20:52:36.371336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.821 [2024-07-24 20:52:36.371483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.821 [2024-07-24 20:52:36.371509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.821 [2024-07-24 20:52:36.371524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.822 [2024-07-24 20:52:36.371537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.822 [2024-07-24 20:52:36.371567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.822 qpair failed and we were unable to recover it. 
00:24:40.822 [2024-07-24 20:52:36.381383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:40.822 [2024-07-24 20:52:36.381508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:40.822 [2024-07-24 20:52:36.381543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:40.822 [2024-07-24 20:52:36.381568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:40.822 [2024-07-24 20:52:36.381589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:40.822 [2024-07-24 20:52:36.381621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.822 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.391408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.391529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.391557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.391571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.391584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.391614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.401414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.401533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.401560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.401574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.401587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.401616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.411442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.411586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.411612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.411626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.411639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.411667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.421483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.421593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.421619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.421633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.421645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.421673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.431489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.431648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.431673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.431688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.431701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.431730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.081 [2024-07-24 20:52:36.441526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.081 [2024-07-24 20:52:36.441635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.081 [2024-07-24 20:52:36.441661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.081 [2024-07-24 20:52:36.441681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.081 [2024-07-24 20:52:36.441695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.081 [2024-07-24 20:52:36.441725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.081 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.802571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.802688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.802714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.802729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.802741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.802770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.812543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.812689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.812714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.812728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.812741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.812769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.822607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.822723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.822748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.822763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.822775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.822804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.832649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.832756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.832781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.832795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.832808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.832836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.842741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.842879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.842904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.842919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.842937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.842967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.852673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.852803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.852828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.343 [2024-07-24 20:52:36.852842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.343 [2024-07-24 20:52:36.852854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.343 [2024-07-24 20:52:36.852883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.343 qpair failed and we were unable to recover it. 
00:24:41.343 [2024-07-24 20:52:36.862723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.343 [2024-07-24 20:52:36.862833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.343 [2024-07-24 20:52:36.862859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.344 [2024-07-24 20:52:36.862873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.344 [2024-07-24 20:52:36.862886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.344 [2024-07-24 20:52:36.862917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.344 qpair failed and we were unable to recover it. 
00:24:41.344 [2024-07-24 20:52:36.872838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.344 [2024-07-24 20:52:36.872958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.344 [2024-07-24 20:52:36.872984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.344 [2024-07-24 20:52:36.872998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.344 [2024-07-24 20:52:36.873010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.344 [2024-07-24 20:52:36.873039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.344 qpair failed and we were unable to recover it. 
00:24:41.344 [2024-07-24 20:52:36.882804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.344 [2024-07-24 20:52:36.882936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.344 [2024-07-24 20:52:36.882962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.344 [2024-07-24 20:52:36.882976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.344 [2024-07-24 20:52:36.882989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.344 [2024-07-24 20:52:36.883018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.344 qpair failed and we were unable to recover it. 
00:24:41.344 [2024-07-24 20:52:36.892881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.344 [2024-07-24 20:52:36.892990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.344 [2024-07-24 20:52:36.893016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.344 [2024-07-24 20:52:36.893031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.344 [2024-07-24 20:52:36.893043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.344 [2024-07-24 20:52:36.893083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.344 qpair failed and we were unable to recover it. 
00:24:41.344 [2024-07-24 20:52:36.902873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.344 [2024-07-24 20:52:36.902996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.344 [2024-07-24 20:52:36.903022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.344 [2024-07-24 20:52:36.903037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.344 [2024-07-24 20:52:36.903049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.344 [2024-07-24 20:52:36.903078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.344 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.912871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.913015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.913042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.913057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.913070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.913101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.922988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.923126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.923153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.923167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.923180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.923209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.932896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.933004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.933030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.933050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.933064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.933093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.942943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.943111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.943136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.943150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.943163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.943192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.952977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.953084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.953109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.953123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.953135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.953164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.963006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.963114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.963140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.963154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.963167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.963197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.973014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.973119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.973145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.973159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.973171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.973200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.983064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.983174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.983199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.603 [2024-07-24 20:52:36.983213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.603 [2024-07-24 20:52:36.983226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.603 [2024-07-24 20:52:36.983262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.603 qpair failed and we were unable to recover it. 
00:24:41.603 [2024-07-24 20:52:36.993070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.603 [2024-07-24 20:52:36.993175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.603 [2024-07-24 20:52:36.993200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:36.993214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:36.993226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:36.993264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.003101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.003219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.003254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.003274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.003289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.003319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.013184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.013289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.013316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.013330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.013342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.013373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.023147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.023263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.023294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.023310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.023323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.023352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.033224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.033345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.033370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.033384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.033397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.033426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.043302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.043414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.043439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.043453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.043466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.043496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.053265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.053420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.053446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.053464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.053477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.053507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.063279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.063391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.063418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.063432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.063444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.063479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.073346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:41.604 [2024-07-24 20:52:37.073472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:41.604 [2024-07-24 20:52:37.073499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:41.604 [2024-07-24 20:52:37.073514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:41.604 [2024-07-24 20:52:37.073530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:41.604 [2024-07-24 20:52:37.073561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.604 qpair failed and we were unable to recover it. 
00:24:41.604 [2024-07-24 20:52:37.083366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.604 [2024-07-24 20:52:37.083485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.604 [2024-07-24 20:52:37.083512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.604 [2024-07-24 20:52:37.083526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.604 [2024-07-24 20:52:37.083538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.604 [2024-07-24 20:52:37.083568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.604 qpair failed and we were unable to recover it.
00:24:41.604 [2024-07-24 20:52:37.093361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.604 [2024-07-24 20:52:37.093466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.604 [2024-07-24 20:52:37.093491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.604 [2024-07-24 20:52:37.093505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.604 [2024-07-24 20:52:37.093518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.604 [2024-07-24 20:52:37.093546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.604 qpair failed and we were unable to recover it.
00:24:41.604 [2024-07-24 20:52:37.103393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.604 [2024-07-24 20:52:37.103497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.604 [2024-07-24 20:52:37.103523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.604 [2024-07-24 20:52:37.103537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.604 [2024-07-24 20:52:37.103549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.604 [2024-07-24 20:52:37.103579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.604 qpair failed and we were unable to recover it.
00:24:41.604 [2024-07-24 20:52:37.113445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.604 [2024-07-24 20:52:37.113555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.604 [2024-07-24 20:52:37.113588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.604 [2024-07-24 20:52:37.113605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.604 [2024-07-24 20:52:37.113618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.604 [2024-07-24 20:52:37.113659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.604 qpair failed and we were unable to recover it.
00:24:41.604 [2024-07-24 20:52:37.123472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.604 [2024-07-24 20:52:37.123595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.604 [2024-07-24 20:52:37.123621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.604 [2024-07-24 20:52:37.123635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.605 [2024-07-24 20:52:37.123648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.605 [2024-07-24 20:52:37.123676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.605 qpair failed and we were unable to recover it.
00:24:41.605 [2024-07-24 20:52:37.133506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.605 [2024-07-24 20:52:37.133603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.605 [2024-07-24 20:52:37.133628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.605 [2024-07-24 20:52:37.133642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.605 [2024-07-24 20:52:37.133655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.605 [2024-07-24 20:52:37.133684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.605 qpair failed and we were unable to recover it.
00:24:41.605 [2024-07-24 20:52:37.143518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.605 [2024-07-24 20:52:37.143668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.605 [2024-07-24 20:52:37.143694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.605 [2024-07-24 20:52:37.143708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.605 [2024-07-24 20:52:37.143721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.605 [2024-07-24 20:52:37.143750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.605 qpair failed and we were unable to recover it.
00:24:41.605 [2024-07-24 20:52:37.153561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.605 [2024-07-24 20:52:37.153690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.605 [2024-07-24 20:52:37.153715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.605 [2024-07-24 20:52:37.153730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.605 [2024-07-24 20:52:37.153748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.605 [2024-07-24 20:52:37.153778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.605 qpair failed and we were unable to recover it.
00:24:41.605 [2024-07-24 20:52:37.163609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.605 [2024-07-24 20:52:37.163742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.605 [2024-07-24 20:52:37.163768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.605 [2024-07-24 20:52:37.163782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.605 [2024-07-24 20:52:37.163795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.605 [2024-07-24 20:52:37.163824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.605 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.173602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.173711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.173738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.173752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.864 [2024-07-24 20:52:37.173765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.864 [2024-07-24 20:52:37.173799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.864 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.183626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.183739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.183765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.183779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.864 [2024-07-24 20:52:37.183791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.864 [2024-07-24 20:52:37.183821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.864 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.193702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.193813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.193838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.193852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.864 [2024-07-24 20:52:37.193865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.864 [2024-07-24 20:52:37.193895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.864 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.203761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.203871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.203898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.203913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.864 [2024-07-24 20:52:37.203927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.864 [2024-07-24 20:52:37.203958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.864 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.213707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.213814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.213840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.213854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.864 [2024-07-24 20:52:37.213866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.864 [2024-07-24 20:52:37.213896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.864 qpair failed and we were unable to recover it.
00:24:41.864 [2024-07-24 20:52:37.223741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.864 [2024-07-24 20:52:37.223853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.864 [2024-07-24 20:52:37.223879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.864 [2024-07-24 20:52:37.223894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.223906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.223936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.233794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.233908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.233933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.233947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.233959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.233989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.243883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.244022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.244047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.244061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.244079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.244111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.253849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.254006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.254032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.254046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.254058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.254087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.263872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.263990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.264017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.264033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.264046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.264087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.273886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.273991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.274016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.274030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.274043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.274072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.283924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.284052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.284078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.284092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.284105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.284136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.293959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.294082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.294109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.294124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.294137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.294180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.303973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.304105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.304130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.304145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.304158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.304188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.313998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.314152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.314178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.314192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.314204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.314234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.324033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.324177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.324203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.324217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.324230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.324270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.334058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.334170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.334195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.334215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.334230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.334268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.344072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.344186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.344211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.344225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.865 [2024-07-24 20:52:37.344237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.865 [2024-07-24 20:52:37.344275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.865 qpair failed and we were unable to recover it.
00:24:41.865 [2024-07-24 20:52:37.354133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.865 [2024-07-24 20:52:37.354258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.865 [2024-07-24 20:52:37.354283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.865 [2024-07-24 20:52:37.354297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.354309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.354339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.364145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.364264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.364291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.364306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.364320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.364364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.374156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.374264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.374290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.374304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.374317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.374347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.384210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.384340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.384366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.384380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.384392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.384421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.394220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.394335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.394360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.394374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.394386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.394415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.404261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.404371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.404397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.404411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.404433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.404463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.414288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.414392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.414417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.414431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.414443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.414473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:41.866 [2024-07-24 20:52:37.424310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:41.866 [2024-07-24 20:52:37.424417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:41.866 [2024-07-24 20:52:37.424447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:41.866 [2024-07-24 20:52:37.424462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:41.866 [2024-07-24 20:52:37.424474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:41.866 [2024-07-24 20:52:37.424505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.866 qpair failed and we were unable to recover it.
00:24:42.125 [2024-07-24 20:52:37.434348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.125 [2024-07-24 20:52:37.434463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.125 [2024-07-24 20:52:37.434490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.125 [2024-07-24 20:52:37.434505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.125 [2024-07-24 20:52:37.434518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.125 [2024-07-24 20:52:37.434549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.125 qpair failed and we were unable to recover it.
00:24:42.125 [2024-07-24 20:52:37.444377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.444490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.444516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.444531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.444544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.444573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.454416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.454524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.454551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.454565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.454581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.454610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.464446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.464562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.464589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.464604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.464616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.464651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.474567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.474693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.474719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.474733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.474746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.474777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.484495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.484604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.484630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.484645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.484658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.484687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.494516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.494621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.494647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.494661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.494673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.494702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.504624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.504742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.504767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.504781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.504793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.125 [2024-07-24 20:52:37.504823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.125 qpair failed and we were unable to recover it. 
00:24:42.125 [2024-07-24 20:52:37.514569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.125 [2024-07-24 20:52:37.514674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.125 [2024-07-24 20:52:37.514704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.125 [2024-07-24 20:52:37.514719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.125 [2024-07-24 20:52:37.514732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.514760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.524596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.524705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.524730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.524744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.524756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.524785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.534636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.534739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.534765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.534779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.534792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.534820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.544695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.544822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.544848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.544863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.544879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.544910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.554813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.554923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.554948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.554962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.554975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.555010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.564737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.564835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.564862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.564876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.564889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.564919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.574753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.574858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.574883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.574898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.574910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.574939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.584815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.584952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.584978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.584992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.585004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.585035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.594836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.594945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.594971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.594985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.595001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.595029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.604851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.604959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.604985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.604999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.605011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.605041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.614850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.614955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.614980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.614994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.615007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.615035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.624926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.625039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.625065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.625079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.625091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.625132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.634971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.635086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.635112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.635126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.635138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.635166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.644954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.126 [2024-07-24 20:52:37.645061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.126 [2024-07-24 20:52:37.645086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.126 [2024-07-24 20:52:37.645100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.126 [2024-07-24 20:52:37.645118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.126 [2024-07-24 20:52:37.645149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.126 qpair failed and we were unable to recover it. 
00:24:42.126 [2024-07-24 20:52:37.654956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.127 [2024-07-24 20:52:37.655062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.127 [2024-07-24 20:52:37.655088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.127 [2024-07-24 20:52:37.655102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.127 [2024-07-24 20:52:37.655114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.127 [2024-07-24 20:52:37.655144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.127 qpair failed and we were unable to recover it. 
00:24:42.127 [2024-07-24 20:52:37.664993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.127 [2024-07-24 20:52:37.665098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.127 [2024-07-24 20:52:37.665124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.127 [2024-07-24 20:52:37.665138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.127 [2024-07-24 20:52:37.665150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.127 [2024-07-24 20:52:37.665179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.127 qpair failed and we were unable to recover it. 
00:24:42.127 [2024-07-24 20:52:37.675029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.127 [2024-07-24 20:52:37.675136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.127 [2024-07-24 20:52:37.675161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.127 [2024-07-24 20:52:37.675174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.127 [2024-07-24 20:52:37.675186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.127 [2024-07-24 20:52:37.675226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.127 qpair failed and we were unable to recover it. 
00:24:42.127 [2024-07-24 20:52:37.685052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.127 [2024-07-24 20:52:37.685175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.127 [2024-07-24 20:52:37.685201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.127 [2024-07-24 20:52:37.685215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.127 [2024-07-24 20:52:37.685226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.127 [2024-07-24 20:52:37.685264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.127 qpair failed and we were unable to recover it. 
00:24:42.386 [2024-07-24 20:52:37.695084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.386 [2024-07-24 20:52:37.695197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.386 [2024-07-24 20:52:37.695224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.386 [2024-07-24 20:52:37.695238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.386 [2024-07-24 20:52:37.695261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.386 [2024-07-24 20:52:37.695293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.386 qpair failed and we were unable to recover it. 
00:24:42.386 [2024-07-24 20:52:37.705152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.386 [2024-07-24 20:52:37.705276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.386 [2024-07-24 20:52:37.705303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.387 [2024-07-24 20:52:37.705316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.387 [2024-07-24 20:52:37.705329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.387 [2024-07-24 20:52:37.705360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.387 qpair failed and we were unable to recover it. 
00:24:42.387 [2024-07-24 20:52:37.715149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.387 [2024-07-24 20:52:37.715302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.387 [2024-07-24 20:52:37.715328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.387 [2024-07-24 20:52:37.715342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.387 [2024-07-24 20:52:37.715355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.387 [2024-07-24 20:52:37.715384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.387 qpair failed and we were unable to recover it. 
00:24:42.387 [2024-07-24 20:52:37.725201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.725322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.725348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.725362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.725376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.725406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.735230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.735346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.735373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.735394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.735407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.735438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.745255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.745410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.745434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.745449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.745461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.745490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.755298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.755403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.755429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.755443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.755456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.755488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.765292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.765418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.765445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.765459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.765472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.765501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.775320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.775429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.775454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.775469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.775481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.775510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.785350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.785455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.785480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.785494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.785506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.785535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.795385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.795492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.795517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.795531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.795544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.795572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.805386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.805495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.805521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.805535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.805548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.805577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.815454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.815554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.815579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.815593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.815605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.815636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.825459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.825563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.825588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.825607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.387 [2024-07-24 20:52:37.825621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.387 [2024-07-24 20:52:37.825651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.387 qpair failed and we were unable to recover it.
00:24:42.387 [2024-07-24 20:52:37.835497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.387 [2024-07-24 20:52:37.835625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.387 [2024-07-24 20:52:37.835650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.387 [2024-07-24 20:52:37.835664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.835677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.835705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.845539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.845654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.845679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.845693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.845705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.845734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.855538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.855637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.855662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.855676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.855689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.855717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.865585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.865698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.865724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.865737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.865749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.865779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.875653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.875771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.875798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.875812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.875829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.875860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.885669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.885807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.885834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.885848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.885860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.885903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.895750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.895861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.895887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.895901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.895914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.895943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.905699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.905824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.905849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.905863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.905876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.905905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.915721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.915840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.915871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.915885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.915900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.915930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.925801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.925908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.925934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.925948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.925961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.925994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.935787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.935896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.935922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.935936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.935949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.935978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.388 [2024-07-24 20:52:37.945862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.388 [2024-07-24 20:52:37.945976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.388 [2024-07-24 20:52:37.946002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.388 [2024-07-24 20:52:37.946016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.388 [2024-07-24 20:52:37.946029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.388 [2024-07-24 20:52:37.946062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.388 qpair failed and we were unable to recover it.
00:24:42.647 [2024-07-24 20:52:37.955864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.647 [2024-07-24 20:52:37.956007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.647 [2024-07-24 20:52:37.956034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.647 [2024-07-24 20:52:37.956048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.647 [2024-07-24 20:52:37.956061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:37.956097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:37.965891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:37.966007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:37.966034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:37.966049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:37.966062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:37.966091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:37.975943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:37.976072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:37.976097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:37.976112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:37.976124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:37.976153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:37.985946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:37.986058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:37.986084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:37.986098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:37.986111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:37.986140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:37.995977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:37.996088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:37.996114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:37.996129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:37.996141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:37.996169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.005991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.006101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.006131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.006146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.006160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.006190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.016013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.016118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.016143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.016157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.016169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.016199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.026047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.026156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.026182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.026196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.026208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.026237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.036062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.036167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.036192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.036206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.036219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.036254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.046095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.046235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.046268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.046283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.046301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.046332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.056115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.056219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.056251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.056268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.056281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.056311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.066194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.066363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.066389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.066404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.066415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.066445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.076168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.648 [2024-07-24 20:52:38.076285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.648 [2024-07-24 20:52:38.076311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.648 [2024-07-24 20:52:38.076325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.648 [2024-07-24 20:52:38.076338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.648 [2024-07-24 20:52:38.076366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.648 qpair failed and we were unable to recover it.
00:24:42.648 [2024-07-24 20:52:38.086206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.648 [2024-07-24 20:52:38.086319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.648 [2024-07-24 20:52:38.086345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.648 [2024-07-24 20:52:38.086361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.648 [2024-07-24 20:52:38.086374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.648 [2024-07-24 20:52:38.086404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.096254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.096408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.096434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.096450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.096464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.096493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.106266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.106391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.106416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.106430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.106442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.106472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.116295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.116405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.116430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.116444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.116457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.116486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.126347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.126457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.126482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.126497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.126510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.126538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.136478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.136608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.136633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.136655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.136668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.136697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.146413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.146537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.146562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.146576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.146589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.146617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.156411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.156524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.156549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.156564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.156576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.156604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.166470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.166583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.166609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.166623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.166635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.166665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.176475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.176577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.176602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.176616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.176628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.176657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.186493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.186649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.186675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.186690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.186703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.186731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.196618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.196721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.196747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.196761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.196773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.196802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.649 [2024-07-24 20:52:38.206639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.649 [2024-07-24 20:52:38.206746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.649 [2024-07-24 20:52:38.206772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.649 [2024-07-24 20:52:38.206787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.649 [2024-07-24 20:52:38.206799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.649 [2024-07-24 20:52:38.206828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.649 qpair failed and we were unable to recover it. 
00:24:42.907 [2024-07-24 20:52:38.216564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.907 [2024-07-24 20:52:38.216669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.907 [2024-07-24 20:52:38.216696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.907 [2024-07-24 20:52:38.216711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.907 [2024-07-24 20:52:38.216723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.907 [2024-07-24 20:52:38.216752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.907 qpair failed and we were unable to recover it. 
00:24:42.907 [2024-07-24 20:52:38.226634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.907 [2024-07-24 20:52:38.226747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.907 [2024-07-24 20:52:38.226774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.907 [2024-07-24 20:52:38.226795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.907 [2024-07-24 20:52:38.226809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.907 [2024-07-24 20:52:38.226840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.907 qpair failed and we were unable to recover it. 
00:24:42.907 [2024-07-24 20:52:38.236626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.907 [2024-07-24 20:52:38.236741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.907 [2024-07-24 20:52:38.236767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.907 [2024-07-24 20:52:38.236788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.236801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.236831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.246870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.246976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.247003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.247017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.247029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.247058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.256770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.256875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.256901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.256916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.256928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.256956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.266733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.266846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.266872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.266885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.266898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.266930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.276784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.276895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.276922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.276937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.276953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.276984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.286799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.286905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.286931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.286946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.286958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.286987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.296820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.296982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.297010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.297024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.297036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.297067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.306823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.306929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.306954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.306968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.306980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.307010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.317005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.317136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.317167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.317182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.317195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.317225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.327009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.327161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.327188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.327202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.327219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.327258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.336953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.337070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.337096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.337110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.337122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.337152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.347024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.347166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.908 [2024-07-24 20:52:38.347192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.908 [2024-07-24 20:52:38.347206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.908 [2024-07-24 20:52:38.347218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.908 [2024-07-24 20:52:38.347258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.908 qpair failed and we were unable to recover it. 
00:24:42.908 [2024-07-24 20:52:38.356997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:42.908 [2024-07-24 20:52:38.357115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:42.909 [2024-07-24 20:52:38.357141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:42.909 [2024-07-24 20:52:38.357156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:42.909 [2024-07-24 20:52:38.357170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:42.909 [2024-07-24 20:52:38.357205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.909 qpair failed and we were unable to recover it. 
00:24:42.909 [2024-07-24 20:52:38.367063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.367177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.367205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.367220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.367232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.367273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.377043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.377160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.377186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.377200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.377213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.377249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.387096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.387211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.387237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.387260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.387273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.387304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.397096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.397205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.397230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.397252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.397267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.397295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.407126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.407229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.407267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.407283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.407296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.407326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.417140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.417251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.417277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.417291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.417303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.417332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.427180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.427296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.427322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.427336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.427347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.427377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.437232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.437364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.437390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.437405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.437417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.437447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.447288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.447436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.447462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.447476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.447494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.447526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.457265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.457375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.457402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.457416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.457429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.457458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:42.909 [2024-07-24 20:52:38.467413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:42.909 [2024-07-24 20:52:38.467571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:42.909 [2024-07-24 20:52:38.467597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:42.909 [2024-07-24 20:52:38.467611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:42.909 [2024-07-24 20:52:38.467624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:42.909 [2024-07-24 20:52:38.467653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:42.909 qpair failed and we were unable to recover it.
00:24:43.168 [2024-07-24 20:52:38.477342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.168 [2024-07-24 20:52:38.477457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.168 [2024-07-24 20:52:38.477484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.168 [2024-07-24 20:52:38.477499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.168 [2024-07-24 20:52:38.477511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.477542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.487459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.487575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.487602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.487617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.487630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.487661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.497418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.497570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.497596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.497610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.497622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.497651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.507439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.507567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.507592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.507607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.507620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.507650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.517557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.517671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.517697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.517711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.517724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.517752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.527477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.527622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.527648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.527662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.527675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.527705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.537517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.537644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.537669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.537684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.537705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.537734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.547520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.547630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.547655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.547669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.547682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.547712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.557589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.557696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.557721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.557735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.557747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.557775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.567599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.567711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.567737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.567751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.567763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.567795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.577620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.577769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.577794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.577808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.577821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.577851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.587650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.587760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.587785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.587799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.587812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.587841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.597680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.597781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.597806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.597821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.597833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.169 [2024-07-24 20:52:38.597862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.169 qpair failed and we were unable to recover it.
00:24:43.169 [2024-07-24 20:52:38.607747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.169 [2024-07-24 20:52:38.607857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.169 [2024-07-24 20:52:38.607882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.169 [2024-07-24 20:52:38.607896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.169 [2024-07-24 20:52:38.607908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.607937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.617718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.617868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.617894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.617908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.617921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.617950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.627762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.627868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.627893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.627913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.627927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.627956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.637830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.637952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.637979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.637994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.638010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.638040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.647855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.647980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.648006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.648020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.648032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.648061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.657875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.657982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.658009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.658023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.658036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.658064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.667939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.668051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.668077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.668091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.668104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.668133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.677928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.678032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.678057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.678070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.678082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.678110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.687949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.688054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.688080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.688095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.688108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.688138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.697974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.698085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.698111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.698125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.698138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.698167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.708016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.708125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.708151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.708165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.708177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.708207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.718056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:43.170 [2024-07-24 20:52:38.718183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:43.170 [2024-07-24 20:52:38.718213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:43.170 [2024-07-24 20:52:38.718228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:43.170 [2024-07-24 20:52:38.718247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:43.170 [2024-07-24 20:52:38.718279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:43.170 qpair failed and we were unable to recover it.
00:24:43.170 [2024-07-24 20:52:38.728080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.170 [2024-07-24 20:52:38.728183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.170 [2024-07-24 20:52:38.728208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.170 [2024-07-24 20:52:38.728222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.170 [2024-07-24 20:52:38.728234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.170 [2024-07-24 20:52:38.728272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.170 qpair failed and we were unable to recover it. 
00:24:43.430 [2024-07-24 20:52:38.738217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.430 [2024-07-24 20:52:38.738348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.430 [2024-07-24 20:52:38.738375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.430 [2024-07-24 20:52:38.738390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.430 [2024-07-24 20:52:38.738403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.430 [2024-07-24 20:52:38.738432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.430 qpair failed and we were unable to recover it. 
00:24:43.430 [2024-07-24 20:52:38.748168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.430 [2024-07-24 20:52:38.748288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.430 [2024-07-24 20:52:38.748316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.430 [2024-07-24 20:52:38.748333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.430 [2024-07-24 20:52:38.748347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.430 [2024-07-24 20:52:38.748378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.430 qpair failed and we were unable to recover it. 
00:24:43.430 [2024-07-24 20:52:38.758277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.430 [2024-07-24 20:52:38.758392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.430 [2024-07-24 20:52:38.758418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.430 [2024-07-24 20:52:38.758432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.758445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.758480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.768190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.768305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.768332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.768345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.768358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.768389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.778230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.778346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.778372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.778386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.778398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.778428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.788308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.788424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.788450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.788464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.788477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.788510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.798304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.798436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.798462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.798476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.798491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.798520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.808321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.808438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.808470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.808485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.808498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.808526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.818330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.818433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.818459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.818473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.818485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.818514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.828378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.828487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.828513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.828527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.828539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.828585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.838391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.838536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.838563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.838577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.838590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.838618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.848430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.848531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.848557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.848571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.848589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.848619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.858447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.858559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.858585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.431 [2024-07-24 20:52:38.858599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.431 [2024-07-24 20:52:38.858612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.431 [2024-07-24 20:52:38.858640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.431 qpair failed and we were unable to recover it. 
00:24:43.431 [2024-07-24 20:52:38.868507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.431 [2024-07-24 20:52:38.868640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.431 [2024-07-24 20:52:38.868666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.868680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.868693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.868722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.878631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.878755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.878780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.878794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.878806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.878835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.888585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.888693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.888720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.888734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.888747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.888789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.898724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.898838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.898864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.898878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.898891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.898920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.908650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.908758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.908783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.908797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.908812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.908841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.918609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.918713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.918738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.918752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.918765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.918794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.928684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.928788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.928813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.928828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.928841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.928870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.938703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.938809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.938835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.938849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.938869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.938899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.948756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.948874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.948900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.948914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.948926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.948955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.958770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.958885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.958910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.958924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.958936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.958965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.968794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.968918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.432 [2024-07-24 20:52:38.968944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.432 [2024-07-24 20:52:38.968959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.432 [2024-07-24 20:52:38.968973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.432 [2024-07-24 20:52:38.969003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.432 qpair failed and we were unable to recover it. 
00:24:43.432 [2024-07-24 20:52:38.978810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.432 [2024-07-24 20:52:38.978914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.433 [2024-07-24 20:52:38.978939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.433 [2024-07-24 20:52:38.978953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.433 [2024-07-24 20:52:38.978966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.433 [2024-07-24 20:52:38.978995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.433 qpair failed and we were unable to recover it. 
00:24:43.433 [2024-07-24 20:52:38.988847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.433 [2024-07-24 20:52:38.988959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.433 [2024-07-24 20:52:38.988985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.433 [2024-07-24 20:52:38.988999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.433 [2024-07-24 20:52:38.989011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.433 [2024-07-24 20:52:38.989040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.433 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:38.998856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:38.998958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:38.998986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:38.999001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:38.999014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:38.999043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.008896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.009056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.009083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.009098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.009111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.009141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.018992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.019111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.019137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.019151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.019164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.019194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.028945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.029054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.029081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.029101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.029115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.029147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.038972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.039082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.039109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.039123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.039137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.039169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.049019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.049137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.049163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.049178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.049190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.049219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.059018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.059121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.059146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.059160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.059173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.692 [2024-07-24 20:52:39.059202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.692 qpair failed and we were unable to recover it. 
00:24:43.692 [2024-07-24 20:52:39.069150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.692 [2024-07-24 20:52:39.069285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.692 [2024-07-24 20:52:39.069311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.692 [2024-07-24 20:52:39.069325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.692 [2024-07-24 20:52:39.069338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.069367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.079070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.079175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.079201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.079215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.079228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.079264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.089101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.089248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.089275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.089289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.089301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.089331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.099144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.099272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.099297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.099311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.099322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.099351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.109172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.109300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.109326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.109340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.109354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.109383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.119185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.119300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.119331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.119346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.119359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.119389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.129228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.129341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.129366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.129380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.129392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.129422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.139255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.139358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.139384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.139398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.139412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.139444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.149301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.149413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.149439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.149453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.149466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.149496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.159307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.159422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.159448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.159462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.159474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.159509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.169329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.169435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.169461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.169475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.169489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.169519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.179449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.179554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.693 [2024-07-24 20:52:39.179579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.693 [2024-07-24 20:52:39.179593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.693 [2024-07-24 20:52:39.179606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.693 [2024-07-24 20:52:39.179635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.693 qpair failed and we were unable to recover it. 
00:24:43.693 [2024-07-24 20:52:39.189405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.693 [2024-07-24 20:52:39.189529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.189555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.189569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.189582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.189611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.199413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.199557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.199582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.199596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.199609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.199637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.209500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.209607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.209637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.209653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.209665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.209696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.219469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.219577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.219603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.219617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.219629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.219657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.229523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.229632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.229657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.229671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.229683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.229712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.239548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.239655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.239680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.239694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.239707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.239737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.694 [2024-07-24 20:52:39.249591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.694 [2024-07-24 20:52:39.249699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.694 [2024-07-24 20:52:39.249725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.694 [2024-07-24 20:52:39.249740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.694 [2024-07-24 20:52:39.249753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.694 [2024-07-24 20:52:39.249790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.694 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.259608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.259725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.259753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.259768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.259781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.259812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.269640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.269758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.269785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.269799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.269812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.269843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.279657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.279762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.279787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.279801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.279814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.279844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.289723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.289847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.289873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.289887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.289900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.289928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.299738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.299843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.299868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.299882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.299895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.299924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.309774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.309884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.309912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.309926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.309939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.309970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.319766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.319869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.319895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.319910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.319923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.319955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.329802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.329913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.329939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.329953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.953 [2024-07-24 20:52:39.329966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.953 [2024-07-24 20:52:39.329996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.953 qpair failed and we were unable to recover it. 
00:24:43.953 [2024-07-24 20:52:39.339820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.953 [2024-07-24 20:52:39.339956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.953 [2024-07-24 20:52:39.339982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.953 [2024-07-24 20:52:39.339996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.340014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.340044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.349862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.349972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.349998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.350012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.350024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.350054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.359879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.359989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.360014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.360029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.360041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.360070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.369963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.370107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.370133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.370148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.370160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.370191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.379944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.380054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.380080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.380093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.380106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.380135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.389975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.390086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.390112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.390127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.390139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.390169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.400018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.400122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.400148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.400162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.400175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.400205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.410048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.410160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.410185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.410199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.410212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.410247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.420075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.420195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.420221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.420235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.420256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.420289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.430113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.430221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.430254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.430276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.430289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.430318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.440130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.440254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.440282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.440297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.440309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.440339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.450151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.450272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.450298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.450313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.450325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.450355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.460173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.460294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.460322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.460337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.460353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.460385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.954 [2024-07-24 20:52:39.470224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.954 [2024-07-24 20:52:39.470344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.954 [2024-07-24 20:52:39.470370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.954 [2024-07-24 20:52:39.470384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.954 [2024-07-24 20:52:39.470396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.954 [2024-07-24 20:52:39.470426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.954 qpair failed and we were unable to recover it. 
00:24:43.955 [2024-07-24 20:52:39.480239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.955 [2024-07-24 20:52:39.480352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.955 [2024-07-24 20:52:39.480379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.955 [2024-07-24 20:52:39.480394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.955 [2024-07-24 20:52:39.480406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.955 [2024-07-24 20:52:39.480438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.955 qpair failed and we were unable to recover it. 
00:24:43.955 [2024-07-24 20:52:39.490261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.955 [2024-07-24 20:52:39.490369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.955 [2024-07-24 20:52:39.490396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.955 [2024-07-24 20:52:39.490411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.955 [2024-07-24 20:52:39.490425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.955 [2024-07-24 20:52:39.490455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.955 qpair failed and we were unable to recover it. 
00:24:43.955 [2024-07-24 20:52:39.500297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.955 [2024-07-24 20:52:39.500410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.955 [2024-07-24 20:52:39.500437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.955 [2024-07-24 20:52:39.500451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.955 [2024-07-24 20:52:39.500464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.955 [2024-07-24 20:52:39.500493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.955 qpair failed and we were unable to recover it. 
00:24:43.955 [2024-07-24 20:52:39.510409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:43.955 [2024-07-24 20:52:39.510519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:43.955 [2024-07-24 20:52:39.510544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:43.955 [2024-07-24 20:52:39.510558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:43.955 [2024-07-24 20:52:39.510570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:43.955 [2024-07-24 20:52:39.510600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.955 qpair failed and we were unable to recover it. 
00:24:44.213 [2024-07-24 20:52:39.520371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.520516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.520544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.520564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.520578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.520621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.530369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.530474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.530501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.530515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.530529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.530559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.540383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.540489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.540515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.540529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.540542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.540571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.550447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.550576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.550602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.550616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.550629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.550657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.560476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.560590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.560615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.560630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.560643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.560673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.570485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.570588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.570614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.570628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.570640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.570670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.580508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.580611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.580637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.580651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.580664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.580692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.590537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.590645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.590670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.590683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.590696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.590725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.600583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.600685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.600712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.600726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.600738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.600767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.610621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.610737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.610768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.610784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.610797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.610826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.620645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.620755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.620781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.620795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.620808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.620838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.630667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.630773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.630798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.630811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.630824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.630853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.640712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.214 [2024-07-24 20:52:39.640826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.214 [2024-07-24 20:52:39.640852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.214 [2024-07-24 20:52:39.640867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.214 [2024-07-24 20:52:39.640878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90 00:24:44.214 [2024-07-24 20:52:39.640907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:44.214 qpair failed and we were unable to recover it. 
00:24:44.214 [2024-07-24 20:52:39.650723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.214 [2024-07-24 20:52:39.650834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.214 [2024-07-24 20:52:39.650860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.650874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.650887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.650922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.660767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.660878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.660904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.660919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.660931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.660960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.670863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.670973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.670998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.671012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.671025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.671054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.680787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.680899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.680923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.680937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.680949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.680977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.690836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.690941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.690966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.690980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.690993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.691022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.700865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.700980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.701010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.701025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.701038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.701066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.710903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.711015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.711040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.711053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.711066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.711094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.721005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.721114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.721139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.721154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.721166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.721194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.730986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.731104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.731129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.731143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.731156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.731185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.740963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.741064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.741090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.741105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.741123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.741152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.751007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.751160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.751184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.751199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.751211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.751250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.761025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.761129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.761154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.761168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.761180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.761210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.215 [2024-07-24 20:52:39.771074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.215 [2024-07-24 20:52:39.771195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.215 [2024-07-24 20:52:39.771220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.215 [2024-07-24 20:52:39.771234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.215 [2024-07-24 20:52:39.771255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.215 [2024-07-24 20:52:39.771286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.215 qpair failed and we were unable to recover it.
00:24:44.473 [2024-07-24 20:52:39.781215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.473 [2024-07-24 20:52:39.781332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.473 [2024-07-24 20:52:39.781360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.473 [2024-07-24 20:52:39.781374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.473 [2024-07-24 20:52:39.781387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.473 [2024-07-24 20:52:39.781417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.473 qpair failed and we were unable to recover it.
00:24:44.473 [2024-07-24 20:52:39.791225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.791377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.791404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.791418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.791431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.791460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.801156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.801270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.801296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.801310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.801323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.801352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.811190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.811317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.811343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.811357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.811370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.811399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.821193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.821306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.821332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.821346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.821358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.821388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.831227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.831348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.831374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.831397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.831411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.831442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.841277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.841384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.841409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.841423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.841435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.841464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.851292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.851404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.851430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.851445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.851457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.851486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.861309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.861459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.861485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.861499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.861512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.861540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.871360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.871470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.871498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.871513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.871525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.871555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.881382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.881500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.881525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.881539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.881552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.881580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.891395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.891505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.891531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.891546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.891558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.891587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.901444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.901572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.901597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.901611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.474 [2024-07-24 20:52:39.901623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.474 [2024-07-24 20:52:39.901653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.474 qpair failed and we were unable to recover it.
00:24:44.474 [2024-07-24 20:52:39.911498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.474 [2024-07-24 20:52:39.911611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.474 [2024-07-24 20:52:39.911636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.474 [2024-07-24 20:52:39.911650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.911663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.475 [2024-07-24 20:52:39.911691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.921526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.921649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.921675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.921695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.921708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.475 [2024-07-24 20:52:39.921738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.931495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.931608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.931635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.931649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.931662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.475 [2024-07-24 20:52:39.931690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.941567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.941674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.941699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.941713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.941726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.475 [2024-07-24 20:52:39.941755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.951658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.951770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.951798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.951813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.951825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc8000b90
00:24:44.475 [2024-07-24 20:52:39.951854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.961612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.961723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.961756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.961774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.961788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc0000b90
00:24:44.475 [2024-07-24 20:52:39.961818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.971699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.971806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.971834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.971849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.971862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fc0000b90
00:24:44.475 [2024-07-24 20:52:39.971905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.981649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.981761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.981793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.981809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.981823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x672250
00:24:44.475 [2024-07-24 20:52:39.981852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:39.991672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:39.991782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:39.991809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:39.991823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:39.991836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x672250
00:24:44.475 [2024-07-24 20:52:39.991864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:40.001715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:44.475 [2024-07-24 20:52:40.001851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:44.475 [2024-07-24 20:52:40.001890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:44.475 [2024-07-24 20:52:40.001915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:44.475 [2024-07-24 20:52:40.001936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90
00:24:44.475 [2024-07-24 20:52:40.001984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:44.475 qpair failed and we were unable to recover it.
00:24:44.475 [2024-07-24 20:52:40.011798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:44.475 [2024-07-24 20:52:40.011912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:44.475 [2024-07-24 20:52:40.011949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:44.475 [2024-07-24 20:52:40.011968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:44.475 [2024-07-24 20:52:40.011982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4fb8000b90 00:24:44.475 [2024-07-24 20:52:40.012016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:44.475 qpair failed and we were unable to recover it. 00:24:44.475 [2024-07-24 20:52:40.012119] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:44.475 A controller has encountered a failure and is being reset. 00:24:44.734 Controller properly reset. 00:24:44.734 Initializing NVMe Controllers 00:24:44.734 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:44.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:44.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:44.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:44.734 Initialization complete. Launching workers. 
00:24:44.734 Starting thread on core 1 00:24:44.734 Starting thread on core 2 00:24:44.734 Starting thread on core 3 00:24:44.734 Starting thread on core 0 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:44.734 00:24:44.734 real 0m11.554s 00:24:44.734 user 0m21.073s 00:24:44.734 sys 0m5.366s 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.734 ************************************ 00:24:44.734 END TEST nvmf_target_disconnect_tc2 00:24:44.734 ************************************ 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.734 rmmod nvme_tcp 00:24:44.734 rmmod nvme_fabrics 00:24:44.734 rmmod nvme_keyring 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1695888 ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1695888 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1695888 ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1695888 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1695888 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1695888' 00:24:44.734 killing process with pid 1695888 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1695888 00:24:44.734 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1695888 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.992 20:52:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.522 00:24:47.522 real 0m16.405s 00:24:47.522 user 0m47.785s 00:24:47.522 sys 0m7.325s 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:47.522 ************************************ 00:24:47.522 END TEST nvmf_target_disconnect 00:24:47.522 ************************************ 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:47.522 00:24:47.522 real 5m4.452s 00:24:47.522 user 10m53.832s 00:24:47.522 sys 1m10.156s 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.522 20:52:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.522 ************************************ 00:24:47.522 END TEST nvmf_host 00:24:47.522 ************************************ 00:24:47.522 00:24:47.522 real 19m36.443s 00:24:47.522 user 46m28.627s 00:24:47.522 sys 4m48.860s 00:24:47.522 20:52:42 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.522 20:52:42 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.522 ************************************ 00:24:47.522 END TEST nvmf_tcp 00:24:47.522 ************************************ 00:24:47.522 20:52:42 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:24:47.522 20:52:42 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:47.522 20:52:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:47.522 20:52:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:47.522 20:52:42 -- common/autotest_common.sh@10 -- # set +x 00:24:47.522 ************************************ 00:24:47.522 START TEST spdkcli_nvmf_tcp 00:24:47.522 ************************************ 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:47.522 * Looking for test storage... 00:24:47.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:47.522 20:52:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1697092 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1697092 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1697092 ']' 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.523 20:52:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.523 [2024-07-24 20:52:42.735144] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:24:47.523 [2024-07-24 20:52:42.735221] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697092 ] 00:24:47.523 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.523 [2024-07-24 20:52:42.792921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:47.523 [2024-07-24 20:52:42.902333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.523 [2024-07-24 20:52:42.902337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:24:47.523 20:52:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:47.523 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:47.523 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:47.523 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:47.523 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:47.523 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:47.523 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:47.523 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.523 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.523 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:47.523 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:47.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:47.523 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:47.523 ' 00:24:50.051 [2024-07-24 20:52:45.568674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.423 [2024-07-24 20:52:46.796972] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:53.982 [2024-07-24 20:52:49.080184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:55.878 [2024-07-24 20:52:51.054690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:57.247 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:57.247 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:57.247 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.247 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.247 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:57.247 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:57.247 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 20:52:52 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:57.247 20:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.810 20:52:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:57.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:57.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:57.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:57.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:57.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:57.810 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:57.810 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:57.810 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:57.810 ' 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:03.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:03.067 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:03.067 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:03.067 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1697092 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1697092 ']' 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1697092 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697092 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697092' 00:25:03.067 killing process with pid 1697092 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1697092 00:25:03.067 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1697092 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:03.325 
20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1697092 ']' 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1697092 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1697092 ']' 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1697092 00:25:03.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1697092) - No such process 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1697092 is not found' 00:25:03.325 Process with pid 1697092 is not found 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:03.325 00:25:03.325 real 0m16.145s 00:25:03.325 user 0m34.187s 00:25:03.325 sys 0m0.771s 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.325 20:52:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.325 ************************************ 00:25:03.325 END TEST spdkcli_nvmf_tcp 00:25:03.325 ************************************ 00:25:03.325 20:52:58 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:03.325 20:52:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:03.326 20:52:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.326 20:52:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.326 ************************************ 00:25:03.326 START TEST 
nvmf_identify_passthru 00:25:03.326 ************************************ 00:25:03.326 20:52:58 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:03.326 * Looking for test storage... 00:25:03.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.326 20:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.326 20:52:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.326 20:52:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.326 20:52:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.326 20:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.326 20:52:58 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.326 20:52:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.326 20:52:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:25:03.326 20:52:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.326 20:52:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.326 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.326 20:52:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:03.326 20:52:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.584 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.584 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.584 20:52:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.584 20:52:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.500 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.501 20:53:00 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.501 20:53:00 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.501 20:53:00 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:25:05.501 00:25:05.501 --- 10.0.0.2 ping statistics --- 00:25:05.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.501 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:05.501 00:25:05.501 --- 10.0.0.1 ping statistics --- 00:25:05.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.501 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.501 20:53:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:05.501 20:53:00 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:05.501 20:53:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:05.501 20:53:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:05.501 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.677 20:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:09.677 20:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:09.677 20:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:25:09.677 20:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:09.677 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1701706 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.856 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1701706 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1701706 ']' 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:13.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.856 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:14.115 [2024-07-24 20:53:09.440480] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:25:14.115 [2024-07-24 20:53:09.440584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.115 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.115 [2024-07-24 20:53:09.508862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.115 [2024-07-24 20:53:09.618551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.115 [2024-07-24 20:53:09.618607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.115 [2024-07-24 20:53:09.618636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.115 [2024-07-24 20:53:09.618655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.115 [2024-07-24 20:53:09.618665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:14.115 [2024-07-24 20:53:09.618716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.115 [2024-07-24 20:53:09.618739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.115 [2024-07-24 20:53:09.618813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.115 [2024-07-24 20:53:09.618816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:25:14.115 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:14.115 INFO: Log level set to 20 00:25:14.115 INFO: Requests: 00:25:14.115 { 00:25:14.115 "jsonrpc": "2.0", 00:25:14.115 "method": "nvmf_set_config", 00:25:14.115 "id": 1, 00:25:14.115 "params": { 00:25:14.115 "admin_cmd_passthru": { 00:25:14.115 "identify_ctrlr": true 00:25:14.115 } 00:25:14.115 } 00:25:14.115 } 00:25:14.115 00:25:14.115 INFO: response: 00:25:14.115 { 00:25:14.115 "jsonrpc": "2.0", 00:25:14.115 "id": 1, 00:25:14.115 "result": true 00:25:14.115 } 00:25:14.115 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.115 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.115 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:14.115 INFO: Setting log level to 20 00:25:14.115 INFO: Setting log level to 20 00:25:14.115 INFO: Log level set to 20 00:25:14.115 INFO: Log level set to 20 00:25:14.115 
INFO: Requests: 00:25:14.115 { 00:25:14.115 "jsonrpc": "2.0", 00:25:14.115 "method": "framework_start_init", 00:25:14.115 "id": 1 00:25:14.115 } 00:25:14.115 00:25:14.115 INFO: Requests: 00:25:14.115 { 00:25:14.115 "jsonrpc": "2.0", 00:25:14.115 "method": "framework_start_init", 00:25:14.115 "id": 1 00:25:14.115 } 00:25:14.115 00:25:14.374 [2024-07-24 20:53:09.774632] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:14.374 INFO: response: 00:25:14.374 { 00:25:14.374 "jsonrpc": "2.0", 00:25:14.374 "id": 1, 00:25:14.374 "result": true 00:25:14.374 } 00:25:14.374 00:25:14.374 INFO: response: 00:25:14.374 { 00:25:14.374 "jsonrpc": "2.0", 00:25:14.374 "id": 1, 00:25:14.374 "result": true 00:25:14.374 } 00:25:14.374 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.374 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:14.374 INFO: Setting log level to 40 00:25:14.374 INFO: Setting log level to 40 00:25:14.374 INFO: Setting log level to 40 00:25:14.374 [2024-07-24 20:53:09.784781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.374 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:14.374 20:53:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:14.374 20:53:09 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.374 20:53:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 Nvme0n1 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 [2024-07-24 20:53:12.681688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.651 20:53:12 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 [ 00:25:17.651 { 00:25:17.651 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:17.651 "subtype": "Discovery", 00:25:17.651 "listen_addresses": [], 00:25:17.651 "allow_any_host": true, 00:25:17.651 "hosts": [] 00:25:17.651 }, 00:25:17.651 { 00:25:17.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.651 "subtype": "NVMe", 00:25:17.651 "listen_addresses": [ 00:25:17.651 { 00:25:17.651 "trtype": "TCP", 00:25:17.651 "adrfam": "IPv4", 00:25:17.651 "traddr": "10.0.0.2", 00:25:17.651 "trsvcid": "4420" 00:25:17.651 } 00:25:17.651 ], 00:25:17.651 "allow_any_host": true, 00:25:17.651 "hosts": [], 00:25:17.651 "serial_number": "SPDK00000000000001", 00:25:17.651 "model_number": "SPDK bdev Controller", 00:25:17.651 "max_namespaces": 1, 00:25:17.651 "min_cntlid": 1, 00:25:17.651 "max_cntlid": 65519, 00:25:17.651 "namespaces": [ 00:25:17.651 { 00:25:17.651 "nsid": 1, 00:25:17.651 "bdev_name": "Nvme0n1", 00:25:17.651 "name": "Nvme0n1", 00:25:17.651 "nguid": "E4D2014268C04A09AA8FB80835990781", 00:25:17.651 "uuid": "e4d20142-68c0-4a09-aa8f-b80835990781" 00:25:17.651 } 00:25:17.651 ] 00:25:17.651 } 00:25:17.651 ] 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:17.651 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:17.651 20:53:12 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:17.651 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:17.651 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.651 rmmod 
nvme_tcp 00:25:17.651 rmmod nvme_fabrics 00:25:17.651 rmmod nvme_keyring 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1701706 ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1701706 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1701706 ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1701706 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701706 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701706' 00:25:17.651 killing process with pid 1701706 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1701706 00:25:17.651 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1701706 00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.023 20:53:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.023 20:53:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:19.023 20:53:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.584 20:53:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.584 00:25:21.584 real 0m17.785s 00:25:21.584 user 0m26.100s 00:25:21.584 sys 0m2.174s 00:25:21.584 20:53:16 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.584 20:53:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:21.584 ************************************ 00:25:21.584 END TEST nvmf_identify_passthru 00:25:21.584 ************************************ 00:25:21.584 20:53:16 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:21.585 20:53:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:21.585 20:53:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.585 20:53:16 -- common/autotest_common.sh@10 -- # set +x 00:25:21.585 ************************************ 00:25:21.585 START TEST nvmf_dif 00:25:21.585 ************************************ 00:25:21.585 20:53:16 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:21.585 * Looking for test storage... 
00:25:21.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.585 20:53:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.585 20:53:16 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.585 20:53:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.585 20:53:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.585 20:53:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.585 20:53:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.585 20:53:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:21.585 20:53:16 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:21.585 20:53:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.585 20:53:16 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:21.585 20:53:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.585 20:53:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.585 20:53:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:23.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:25:23.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:23.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:23.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.485 20:53:18 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:25:23.485 00:25:23.485 --- 10.0.0.2 ping statistics --- 00:25:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.485 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:23.485 00:25:23.485 --- 10.0.0.1 ping statistics --- 00:25:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.485 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:23.485 20:53:18 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:24.419 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:24.419 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:24.419 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:24.419 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:24.420 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:24.420 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:24.420 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:24.420 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:24.420 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:24.420 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:24.420 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:24.420 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:24.420 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:24.420 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:24.420 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:24.420 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:24.420 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.420 20:53:19 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:24.420 20:53:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:24.420 20:53:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1704855 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:24.420 20:53:19 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1704855 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1704855 ']' 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.420 20:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.678 [2024-07-24 20:53:19.997609] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:25:24.678 [2024-07-24 20:53:19.997680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.678 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.678 [2024-07-24 20:53:20.068913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.678 [2024-07-24 20:53:20.183695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.678 [2024-07-24 20:53:20.183753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.678 [2024-07-24 20:53:20.183782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.678 [2024-07-24 20:53:20.183794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.678 [2024-07-24 20:53:20.183803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.678 [2024-07-24 20:53:20.183846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:25:24.937 20:53:20 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 20:53:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.937 20:53:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:24.937 20:53:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 [2024-07-24 20:53:20.335248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.937 20:53:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 ************************************ 00:25:24.937 START TEST fio_dif_1_default 00:25:24.937 ************************************ 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 bdev_null0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:24.937 [2024-07-24 20:53:20.395590] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:24.937 { 00:25:24.937 "params": { 00:25:24.937 "name": "Nvme$subsystem", 00:25:24.937 "trtype": "$TEST_TRANSPORT", 00:25:24.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.937 "adrfam": "ipv4", 00:25:24.937 "trsvcid": "$NVMF_PORT", 00:25:24.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.937 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:24.937 "hdgst": ${hdgst:-false}, 00:25:24.937 "ddgst": ${ddgst:-false} 00:25:24.937 }, 00:25:24.937 "method": "bdev_nvme_attach_controller" 00:25:24.937 } 00:25:24.937 EOF 00:25:24.937 )") 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:24.937 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:24.938 "params": { 00:25:24.938 "name": "Nvme0", 00:25:24.938 "trtype": "tcp", 00:25:24.938 "traddr": "10.0.0.2", 00:25:24.938 "adrfam": "ipv4", 00:25:24.938 "trsvcid": "4420", 00:25:24.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:24.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:24.938 "hdgst": false, 00:25:24.938 "ddgst": false 00:25:24.938 }, 00:25:24.938 "method": "bdev_nvme_attach_controller" 00:25:24.938 }' 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:24.938 20:53:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.195 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:25.195 fio-3.35 
00:25:25.196 Starting 1 thread 00:25:25.196 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.390 00:25:37.390 filename0: (groupid=0, jobs=1): err= 0: pid=1705084: Wed Jul 24 20:53:31 2024 00:25:37.390 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10023msec) 00:25:37.390 slat (nsec): min=5043, max=64385, avg=9266.74, stdev=3118.33 00:25:37.390 clat (usec): min=40939, max=47811, avg=41907.48, stdev=475.63 00:25:37.390 lat (usec): min=40947, max=47826, avg=41916.74, stdev=475.59 00:25:37.390 clat percentiles (usec): 00:25:37.390 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:25:37.390 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:25:37.390 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:37.390 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:25:37.390 | 99.99th=[47973] 00:25:37.390 bw ( KiB/s): min= 352, max= 384, per=99.60%, avg=380.80, stdev= 9.85, samples=20 00:25:37.390 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:25:37.390 lat (msec) : 50=100.00% 00:25:37.390 cpu : usr=89.45%, sys=9.83%, ctx=28, majf=0, minf=255 00:25:37.390 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:37.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.390 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.390 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:37.390 00:25:37.390 Run status group 0 (all jobs): 00:25:37.390 READ: bw=382KiB/s (391kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10023-10023msec 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 00:25:37.390 real 0m11.289s 00:25:37.390 user 0m10.227s 00:25:37.390 sys 0m1.272s 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 ************************************ 00:25:37.390 END TEST fio_dif_1_default 00:25:37.390 ************************************ 00:25:37.390 20:53:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:37.390 20:53:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:37.390 20:53:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 ************************************ 00:25:37.390 START TEST fio_dif_1_multi_subsystems 00:25:37.390 ************************************ 00:25:37.390 20:53:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 bdev_null0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 [2024-07-24 20:53:31.740194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 bdev_null1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:37.390 { 00:25:37.390 "params": { 00:25:37.390 "name": "Nvme$subsystem", 00:25:37.390 "trtype": "$TEST_TRANSPORT", 00:25:37.390 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:37.390 "adrfam": "ipv4", 00:25:37.390 "trsvcid": "$NVMF_PORT", 00:25:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.390 "hdgst": ${hdgst:-false}, 00:25:37.390 "ddgst": ${ddgst:-false} 00:25:37.390 }, 00:25:37.390 "method": "bdev_nvme_attach_controller" 00:25:37.390 } 00:25:37.390 EOF 00:25:37.390 )") 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:37.390 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.391 
20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:37.391 { 00:25:37.391 "params": { 00:25:37.391 "name": "Nvme$subsystem", 00:25:37.391 "trtype": "$TEST_TRANSPORT", 00:25:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.391 "adrfam": "ipv4", 00:25:37.391 "trsvcid": "$NVMF_PORT", 00:25:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.391 "hdgst": ${hdgst:-false}, 00:25:37.391 "ddgst": ${ddgst:-false} 00:25:37.391 }, 00:25:37.391 "method": "bdev_nvme_attach_controller" 00:25:37.391 } 00:25:37.391 EOF 00:25:37.391 )") 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:37.391 "params": { 00:25:37.391 "name": "Nvme0", 00:25:37.391 "trtype": "tcp", 00:25:37.391 "traddr": "10.0.0.2", 00:25:37.391 "adrfam": "ipv4", 00:25:37.391 "trsvcid": "4420", 00:25:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.391 "hdgst": false, 00:25:37.391 "ddgst": false 00:25:37.391 }, 00:25:37.391 "method": "bdev_nvme_attach_controller" 00:25:37.391 },{ 00:25:37.391 "params": { 00:25:37.391 "name": "Nvme1", 00:25:37.391 "trtype": "tcp", 00:25:37.391 "traddr": "10.0.0.2", 00:25:37.391 "adrfam": "ipv4", 00:25:37.391 "trsvcid": "4420", 00:25:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:37.391 "hdgst": false, 00:25:37.391 "ddgst": false 00:25:37.391 }, 00:25:37.391 "method": "bdev_nvme_attach_controller" 00:25:37.391 }' 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:37.391 20:53:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.391 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:37.391 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:37.391 fio-3.35 00:25:37.391 Starting 2 threads 00:25:37.391 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.354 00:25:47.354 filename0: (groupid=0, jobs=1): err= 0: pid=1706485: Wed Jul 24 20:53:42 2024 00:25:47.354 read: IOPS=142, BW=570KiB/s (583kB/s)(5696KiB/10001msec) 00:25:47.354 slat (nsec): min=7041, max=40501, avg=8755.96, stdev=2728.88 00:25:47.354 clat (usec): min=683, max=42605, avg=28064.70, stdev=19114.16 00:25:47.354 lat (usec): min=690, max=42622, avg=28073.45, stdev=19114.22 00:25:47.354 clat percentiles (usec): 00:25:47.354 | 1.00th=[ 709], 5.00th=[ 725], 10.00th=[ 734], 20.00th=[ 742], 00:25:47.354 | 30.00th=[ 840], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:47.354 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:25:47.354 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:47.354 | 99.99th=[42730] 00:25:47.354 bw ( KiB/s): min= 352, max= 768, per=43.39%, avg=576.00, stdev=189.91, samples=19 00:25:47.354 iops : min= 88, max= 192, avg=144.00, stdev=47.48, samples=19 00:25:47.354 lat (usec) : 750=23.10%, 1000=9.20% 00:25:47.354 lat (msec) : 2=0.56%, 50=67.13% 00:25:47.354 cpu : usr=94.23%, sys=5.48%, ctx=14, majf=0, minf=47 00:25:47.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.354 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.354 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:47.354 filename1: (groupid=0, jobs=1): err= 0: pid=1706486: Wed Jul 24 20:53:42 2024 00:25:47.354 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10041msec) 00:25:47.354 slat (nsec): min=6975, max=82310, avg=8789.08, stdev=3538.30 00:25:47.354 clat (usec): min=674, max=42630, avg=21022.34, stdev=20222.14 00:25:47.354 lat (usec): min=682, max=42663, avg=21031.13, stdev=20222.27 00:25:47.354 clat percentiles (usec): 00:25:47.354 | 1.00th=[ 685], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 742], 00:25:47.354 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[41157], 60.00th=[41157], 00:25:47.354 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:47.354 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:25:47.354 | 99.99th=[42730] 00:25:47.354 bw ( KiB/s): min= 704, max= 768, per=57.33%, avg=761.60, stdev=19.70, samples=20 00:25:47.354 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:25:47.354 lat (usec) : 750=21.75%, 1000=27.88% 00:25:47.354 lat (msec) : 2=0.26%, 50=50.10% 00:25:47.354 cpu : usr=94.68%, sys=5.03%, ctx=14, majf=0, minf=249 00:25:47.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.354 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:47.354 00:25:47.354 Run status group 0 (all jobs): 00:25:47.354 READ: bw=1327KiB/s (1359kB/s), 570KiB/s-760KiB/s (583kB/s-778kB/s), io=13.0MiB (13.6MB), run=10001-10041msec 
00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.612 20:53:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.612 00:25:47.612 real 0m11.407s 00:25:47.612 user 0m20.284s 00:25:47.612 sys 0m1.361s 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.612 20:53:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.612 ************************************ 00:25:47.612 END TEST fio_dif_1_multi_subsystems 00:25:47.612 ************************************ 00:25:47.612 20:53:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:47.612 20:53:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.612 20:53:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.612 20:53:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.613 ************************************ 00:25:47.613 START TEST fio_dif_rand_params 00:25:47.613 ************************************ 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:47.613 bdev_null0 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.613 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:47.874 [2024-07-24 20:53:43.196274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:47.874 { 00:25:47.874 "params": { 00:25:47.874 "name": "Nvme$subsystem", 00:25:47.874 "trtype": "$TEST_TRANSPORT", 00:25:47.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.874 "adrfam": "ipv4", 00:25:47.874 "trsvcid": "$NVMF_PORT", 00:25:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.874 "hdgst": ${hdgst:-false}, 00:25:47.874 "ddgst": ${ddgst:-false} 00:25:47.874 }, 00:25:47.874 "method": 
"bdev_nvme_attach_controller" 00:25:47.874 } 00:25:47.874 EOF 00:25:47.874 )") 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files 
)) 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:47.874 "params": { 00:25:47.874 "name": "Nvme0", 00:25:47.874 "trtype": "tcp", 00:25:47.874 "traddr": "10.0.0.2", 00:25:47.874 "adrfam": "ipv4", 00:25:47.874 "trsvcid": "4420", 00:25:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.874 "hdgst": false, 00:25:47.874 "ddgst": false 00:25:47.874 }, 00:25:47.874 "method": "bdev_nvme_attach_controller" 00:25:47.874 }' 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 
00:25:47.874 20:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.132 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:48.132 ... 00:25:48.132 fio-3.35 00:25:48.132 Starting 3 threads 00:25:48.132 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.691 00:25:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1707882: Wed Jul 24 20:53:49 2024 00:25:54.691 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(139MiB/5044msec) 00:25:54.691 slat (usec): min=4, max=211, avg=15.62, stdev= 8.25 00:25:54.691 clat (usec): min=4919, max=92285, avg=13587.54, stdev=10225.03 00:25:54.691 lat (usec): min=4934, max=92299, avg=13603.16, stdev=10227.33 00:25:54.691 clat percentiles (usec): 00:25:54.691 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 7177], 20.00th=[ 8979], 00:25:54.691 | 30.00th=[10028], 40.00th=[11207], 50.00th=[11994], 60.00th=[12649], 00:25:54.691 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15664], 95.00th=[18482], 00:25:54.691 | 99.00th=[55837], 99.50th=[57934], 99.90th=[91751], 99.95th=[92799], 00:25:54.691 | 99.99th=[92799] 00:25:54.691 bw ( KiB/s): min=23599, max=33280, per=35.53%, avg=28318.30, stdev=3280.18, samples=10 00:25:54.691 iops : min= 184, max= 260, avg=221.20, stdev=25.69, samples=10 00:25:54.691 lat (msec) : 10=29.58%, 20=65.46%, 50=1.35%, 100=3.61% 00:25:54.691 cpu : usr=91.35%, sys=8.17%, ctx=23, majf=0, minf=155 00:25:54.691 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:54.691 filename0: (groupid=0, jobs=1): err= 0: 
pid=1707883: Wed Jul 24 20:53:49 2024 00:25:54.691 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(135MiB/5004msec) 00:25:54.691 slat (nsec): min=4522, max=43634, avg=13873.70, stdev=4211.11 00:25:54.691 clat (usec): min=4718, max=88974, avg=13880.81, stdev=10791.51 00:25:54.691 lat (usec): min=4730, max=88987, avg=13894.68, stdev=10791.51 00:25:54.691 clat percentiles (usec): 00:25:54.691 | 1.00th=[ 5342], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 8848], 00:25:54.691 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11338], 60.00th=[11994], 00:25:54.691 | 70.00th=[12780], 80.00th=[13698], 90.00th=[16057], 95.00th=[49546], 00:25:54.691 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[88605], 00:25:54.691 | 99.99th=[88605] 00:25:54.691 bw ( KiB/s): min=23296, max=35328, per=34.59%, avg=27571.20, stdev=4192.72, samples=10 00:25:54.691 iops : min= 182, max= 276, avg=215.40, stdev=32.76, samples=10 00:25:54.691 lat (msec) : 10=31.11%, 20=61.76%, 50=2.78%, 100=4.35% 00:25:54.691 cpu : usr=90.61%, sys=8.93%, ctx=13, majf=0, minf=99 00:25:54.691 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1707884: Wed Jul 24 20:53:49 2024 00:25:54.691 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(119MiB/5044msec) 00:25:54.691 slat (nsec): min=4907, max=48007, avg=14555.25, stdev=4725.46 00:25:54.691 clat (usec): min=5154, max=91817, avg=15830.48, stdev=12314.11 00:25:54.691 lat (usec): min=5167, max=91829, avg=15845.04, stdev=12314.08 00:25:54.691 clat percentiles (usec): 00:25:54.691 | 1.00th=[ 5538], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[ 9241], 00:25:54.691 | 30.00th=[10683], 40.00th=[11863], 
50.00th=[12649], 60.00th=[13435], 00:25:54.691 | 70.00th=[14746], 80.00th=[15926], 90.00th=[18482], 95.00th=[51119], 00:25:54.691 | 99.00th=[56361], 99.50th=[57934], 99.90th=[91751], 99.95th=[91751], 00:25:54.691 | 99.99th=[91751] 00:25:54.691 bw ( KiB/s): min=19200, max=28672, per=30.51%, avg=24320.00, stdev=2827.61, samples=10 00:25:54.691 iops : min= 150, max= 224, avg=190.00, stdev=22.09, samples=10 00:25:54.691 lat (msec) : 10=25.42%, 20=65.44%, 50=3.15%, 100=5.99% 00:25:54.691 cpu : usr=91.29%, sys=8.27%, ctx=7, majf=0, minf=98 00:25:54.691 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.691 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:54.691 00:25:54.691 Run status group 0 (all jobs): 00:25:54.691 READ: bw=77.8MiB/s (81.6MB/s), 23.6MiB/s-27.5MiB/s (24.7MB/s-28.8MB/s), io=393MiB (412MB), run=5004-5044msec 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.691 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 bdev_null0 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 [2024-07-24 20:53:49.428922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:25:54.692 bdev_null1 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:54.692 
20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 bdev_null2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
2 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:54.692 { 00:25:54.692 "params": { 00:25:54.692 "name": "Nvme$subsystem", 00:25:54.692 "trtype": "$TEST_TRANSPORT", 00:25:54.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.692 "adrfam": "ipv4", 00:25:54.692 "trsvcid": "$NVMF_PORT", 00:25:54.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.692 "hdgst": ${hdgst:-false}, 00:25:54.692 "ddgst": ${ddgst:-false} 00:25:54.692 }, 00:25:54.692 "method": "bdev_nvme_attach_controller" 00:25:54.692 } 00:25:54.692 EOF 00:25:54.692 )") 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:54.692 20:53:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:54.692 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:54.692 { 00:25:54.692 "params": { 00:25:54.692 "name": "Nvme$subsystem", 00:25:54.692 "trtype": "$TEST_TRANSPORT", 00:25:54.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.692 "adrfam": "ipv4", 00:25:54.692 "trsvcid": "$NVMF_PORT", 00:25:54.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.692 "hdgst": ${hdgst:-false}, 00:25:54.692 "ddgst": ${ddgst:-false} 00:25:54.692 }, 00:25:54.693 "method": "bdev_nvme_attach_controller" 00:25:54.693 } 00:25:54.693 EOF 00:25:54.693 )") 00:25:54.693 20:53:49 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:54.693 { 00:25:54.693 "params": { 00:25:54.693 "name": "Nvme$subsystem", 00:25:54.693 "trtype": "$TEST_TRANSPORT", 00:25:54.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.693 "adrfam": "ipv4", 00:25:54.693 "trsvcid": "$NVMF_PORT", 00:25:54.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.693 "hdgst": ${hdgst:-false}, 00:25:54.693 "ddgst": ${ddgst:-false} 00:25:54.693 }, 00:25:54.693 "method": "bdev_nvme_attach_controller" 00:25:54.693 } 00:25:54.693 EOF 00:25:54.693 )") 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:54.693 "params": { 00:25:54.693 "name": "Nvme0", 00:25:54.693 "trtype": "tcp", 00:25:54.693 "traddr": "10.0.0.2", 00:25:54.693 "adrfam": "ipv4", 00:25:54.693 "trsvcid": "4420", 00:25:54.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.693 "hdgst": false, 00:25:54.693 "ddgst": false 00:25:54.693 }, 00:25:54.693 "method": "bdev_nvme_attach_controller" 00:25:54.693 },{ 00:25:54.693 "params": { 00:25:54.693 "name": "Nvme1", 00:25:54.693 "trtype": "tcp", 00:25:54.693 "traddr": "10.0.0.2", 00:25:54.693 "adrfam": "ipv4", 00:25:54.693 "trsvcid": "4420", 00:25:54.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.693 "hdgst": false, 00:25:54.693 "ddgst": false 00:25:54.693 }, 00:25:54.693 "method": "bdev_nvme_attach_controller" 00:25:54.693 },{ 00:25:54.693 "params": { 00:25:54.693 "name": "Nvme2", 00:25:54.693 "trtype": "tcp", 00:25:54.693 "traddr": "10.0.0.2", 00:25:54.693 "adrfam": "ipv4", 00:25:54.693 "trsvcid": "4420", 00:25:54.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:54.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:54.693 "hdgst": false, 00:25:54.693 "ddgst": false 00:25:54.693 }, 00:25:54.693 "method": "bdev_nvme_attach_controller" 00:25:54.693 }' 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:54.693 20:53:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:54.693 20:53:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:54.693 ... 00:25:54.693 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:54.693 ... 00:25:54.693 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:54.693 ... 
00:25:54.693 fio-3.35 00:25:54.693 Starting 24 threads 00:25:54.693 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.927 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708747: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10027msec) 00:26:06.927 slat (nsec): min=5308, max=95569, avg=26169.57, stdev=15054.02 00:26:06.927 clat (usec): min=6882, max=43506, avg=33091.37, stdev=2157.28 00:26:06.927 lat (usec): min=6887, max=43531, avg=33117.53, stdev=2157.54 00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[24773], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:06.927 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.927 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[36439], 00:26:06.927 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:26:06.927 | 99.99th=[43254] 00:26:06.927 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1920.15, stdev=71.93, samples=20 00:26:06.927 iops : min= 448, max= 512, avg=480.00, stdev=17.98, samples=20 00:26:06.927 lat (msec) : 10=0.33%, 50=99.67% 00:26:06.927 cpu : usr=97.53%, sys=1.97%, ctx=25, majf=0, minf=61 00:26:06.927 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708748: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.7MiB/10029msec) 00:26:06.927 slat (nsec): min=8621, max=91465, avg=22237.34, stdev=15477.01 00:26:06.927 clat (usec): min=23960, max=46605, avg=33367.56, stdev=2483.45 00:26:06.927 lat (usec): min=23984, max=46639, avg=33389.80, stdev=2487.40 
00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[24249], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:06.927 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:06.927 | 70.00th=[33162], 80.00th=[33817], 90.00th=[35914], 95.00th=[36439], 00:26:06.927 | 99.00th=[42206], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:26:06.927 | 99.99th=[46400] 00:26:06.927 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1903.60, stdev=92.81, samples=20 00:26:06.927 iops : min= 416, max= 512, avg=475.90, stdev=23.20, samples=20 00:26:06.927 lat (msec) : 50=100.00% 00:26:06.927 cpu : usr=96.72%, sys=2.10%, ctx=143, majf=0, minf=70 00:26:06.927 IO depths : 1=5.0%, 2=11.0%, 4=23.8%, 8=52.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 issued rwts: total=4775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708749: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:26:06.927 slat (nsec): min=12007, max=85432, avg=42659.75, stdev=12703.16 00:26:06.927 clat (usec): min=19273, max=79057, avg=33192.01, stdev=2634.35 00:26:06.927 lat (usec): min=19314, max=79081, avg=33234.67, stdev=2632.33 00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.927 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.927 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35914], 95.00th=[36439], 00:26:06.927 | 99.00th=[36963], 99.50th=[43254], 99.90th=[68682], 99.95th=[68682], 00:26:06.927 | 99.99th=[79168] 00:26:06.927 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.927 iops : min= 384, max= 
512, avg=474.95, stdev=24.47, samples=19 00:26:06.927 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.927 cpu : usr=97.84%, sys=1.43%, ctx=142, majf=0, minf=42 00:26:06.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708750: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:26:06.927 slat (usec): min=9, max=143, avg=42.61, stdev=18.35 00:26:06.927 clat (usec): min=31860, max=46905, avg=33184.69, stdev=1659.12 00:26:06.927 lat (usec): min=31892, max=46922, avg=33227.30, stdev=1660.76 00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.927 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.927 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 00:26:06.927 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:26:06.927 | 99.99th=[46924] 00:26:06.927 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=88.99, samples=20 00:26:06.927 iops : min= 416, max= 512, avg=475.20, stdev=22.25, samples=20 00:26:06.927 lat (msec) : 50=100.00% 00:26:06.927 cpu : usr=98.06%, sys=1.52%, ctx=20, majf=0, minf=36 00:26:06.927 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.927 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708751: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10024msec) 00:26:06.927 slat (nsec): min=8183, max=97995, avg=26133.76, stdev=10223.58 00:26:06.927 clat (usec): min=20704, max=57123, avg=33231.21, stdev=1600.95 00:26:06.927 lat (usec): min=20736, max=57140, avg=33257.34, stdev=1600.19 00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:06.927 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:06.927 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[36439], 00:26:06.927 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43779], 99.95th=[43779], 00:26:06.927 | 99.99th=[56886] 00:26:06.927 bw ( KiB/s): min= 1792, max= 2032, per=4.18%, avg=1912.95, stdev=71.76, samples=20 00:26:06.927 iops : min= 448, max= 508, avg=478.20, stdev=18.00, samples=20 00:26:06.927 lat (msec) : 50=99.96%, 100=0.04% 00:26:06.927 cpu : usr=98.51%, sys=1.11%, ctx=17, majf=0, minf=50 00:26:06.927 IO depths : 1=1.0%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:06.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.927 issued rwts: total=4798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.927 filename0: (groupid=0, jobs=1): err= 0: pid=1708752: Wed Jul 24 20:54:00 2024 00:26:06.927 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:26:06.927 slat (usec): min=11, max=100, avg=43.15, stdev=14.41 00:26:06.927 clat (usec): min=19438, max=69227, avg=33204.58, stdev=2568.86 00:26:06.927 lat (usec): min=19477, max=69244, avg=33247.72, stdev=2566.51 00:26:06.927 clat percentiles (usec): 00:26:06.927 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 
20.00th=[32375], 00:26:06.927 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.927 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.927 | 99.00th=[36963], 99.50th=[43254], 99.90th=[68682], 99.95th=[68682], 00:26:06.927 | 99.99th=[69731] 00:26:06.927 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.927 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.927 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.928 cpu : usr=97.90%, sys=1.67%, ctx=14, majf=0, minf=43 00:26:06.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename0: (groupid=0, jobs=1): err= 0: pid=1708753: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10029msec) 00:26:06.928 slat (usec): min=8, max=114, avg=25.88, stdev=22.02 00:26:06.928 clat (usec): min=24723, max=46861, avg=33287.30, stdev=1563.87 00:26:06.928 lat (usec): min=24737, max=46884, avg=33313.18, stdev=1568.10 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:06.928 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.928 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 00:26:06.928 | 99.00th=[38536], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:26:06.928 | 99.99th=[46924] 00:26:06.928 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1907.20, stdev=91.93, samples=20 00:26:06.928 iops : min= 416, max= 512, avg=476.80, stdev=22.98, samples=20 00:26:06.928 lat (msec) : 50=100.00% 00:26:06.928 cpu 
: usr=93.32%, sys=3.77%, ctx=414, majf=0, minf=39 00:26:06.928 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename0: (groupid=0, jobs=1): err= 0: pid=1708754: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10020msec) 00:26:06.928 slat (usec): min=8, max=103, avg=16.49, stdev= 9.53 00:26:06.928 clat (usec): min=12517, max=43463, avg=33256.43, stdev=1792.55 00:26:06.928 lat (usec): min=12558, max=43481, avg=33272.92, stdev=1791.09 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[28967], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:06.928 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:06.928 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.928 | 99.00th=[36963], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:26:06.928 | 99.99th=[43254] 00:26:06.928 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.75, stdev=50.06, samples=20 00:26:06.928 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:26:06.928 lat (msec) : 20=0.33%, 50=99.67% 00:26:06.928 cpu : usr=97.92%, sys=1.70%, ctx=20, majf=0, minf=50 00:26:06.928 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708755: Wed Jul 24 
20:54:00 2024 00:26:06.928 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:26:06.928 slat (nsec): min=9677, max=97891, avg=39077.63, stdev=12368.63 00:26:06.928 clat (usec): min=29059, max=46999, avg=33268.80, stdev=1650.18 00:26:06.928 lat (usec): min=29072, max=47014, avg=33307.88, stdev=1651.09 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:26:06.928 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.928 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 00:26:06.928 | 99.00th=[43254], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:26:06.928 | 99.99th=[46924] 00:26:06.928 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=95.38, samples=20 00:26:06.928 iops : min= 416, max= 512, avg=475.20, stdev=23.85, samples=20 00:26:06.928 lat (msec) : 50=100.00% 00:26:06.928 cpu : usr=96.15%, sys=2.42%, ctx=106, majf=0, minf=46 00:26:06.928 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708756: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:26:06.928 slat (usec): min=14, max=109, avg=41.90, stdev=12.76 00:26:06.928 clat (usec): min=19462, max=70097, avg=33196.94, stdev=2606.71 00:26:06.928 lat (usec): min=19483, max=70128, avg=33238.84, stdev=2605.07 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.928 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.928 | 
70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.928 | 99.00th=[36963], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:26:06.928 | 99.99th=[69731] 00:26:06.928 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.928 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.928 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.928 cpu : usr=98.22%, sys=1.39%, ctx=14, majf=0, minf=37 00:26:06.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708757: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10028msec) 00:26:06.928 slat (nsec): min=6556, max=81492, avg=26320.92, stdev=9136.74 00:26:06.928 clat (usec): min=8481, max=43597, avg=33089.22, stdev=2124.30 00:26:06.928 lat (usec): min=8488, max=43617, avg=33115.54, stdev=2123.64 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:06.928 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.928 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[36439], 00:26:06.928 | 99.00th=[36963], 99.50th=[37487], 99.90th=[43779], 99.95th=[43779], 00:26:06.928 | 99.99th=[43779] 00:26:06.928 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1920.00, stdev=71.93, samples=20 00:26:06.928 iops : min= 448, max= 512, avg=480.00, stdev=17.98, samples=20 00:26:06.928 lat (msec) : 10=0.33%, 50=99.67% 00:26:06.928 cpu : usr=98.17%, sys=1.43%, ctx=22, majf=0, minf=46 00:26:06.928 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708758: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:26:06.928 slat (usec): min=8, max=123, avg=35.19, stdev=18.41 00:26:06.928 clat (usec): min=26687, max=46537, avg=33298.54, stdev=1797.88 00:26:06.928 lat (usec): min=26726, max=46563, avg=33333.73, stdev=1795.20 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:26:06.928 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.928 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.928 | 99.00th=[43254], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:26:06.928 | 99.99th=[46400] 00:26:06.928 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=95.38, samples=20 00:26:06.928 iops : min= 416, max= 512, avg=475.20, stdev=23.85, samples=20 00:26:06.928 lat (msec) : 50=100.00% 00:26:06.928 cpu : usr=94.31%, sys=3.36%, ctx=221, majf=0, minf=33 00:26:06.928 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708759: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:26:06.928 
slat (nsec): min=12018, max=83159, avg=41354.31, stdev=13115.54 00:26:06.928 clat (usec): min=19314, max=69360, avg=33184.35, stdev=2576.95 00:26:06.928 lat (usec): min=19341, max=69385, avg=33225.70, stdev=2575.35 00:26:06.928 clat percentiles (usec): 00:26:06.928 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.928 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.928 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35914], 95.00th=[36439], 00:26:06.928 | 99.00th=[36963], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:26:06.928 | 99.99th=[69731] 00:26:06.928 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.928 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.928 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.928 cpu : usr=97.69%, sys=1.67%, ctx=94, majf=0, minf=41 00:26:06.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.928 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.928 filename1: (groupid=0, jobs=1): err= 0: pid=1708760: Wed Jul 24 20:54:00 2024 00:26:06.928 read: IOPS=478, BW=1914KiB/s (1959kB/s)(18.7MiB/10013msec) 00:26:06.928 slat (nsec): min=12939, max=77925, avg=41588.65, stdev=10210.35 00:26:06.928 clat (usec): min=19464, max=49024, avg=33082.52, stdev=1708.86 00:26:06.929 lat (usec): min=19509, max=49061, avg=33124.11, stdev=1709.08 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:26:06.929 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 
00:26:06.929 | 99.00th=[36963], 99.50th=[42730], 99.90th=[49021], 99.95th=[49021], 00:26:06.929 | 99.99th=[49021] 00:26:06.929 bw ( KiB/s): min= 1712, max= 2048, per=4.17%, avg=1909.05, stdev=87.97, samples=19 00:26:06.929 iops : min= 428, max= 512, avg=477.26, stdev=21.99, samples=19 00:26:06.929 lat (msec) : 20=0.31%, 50=99.69% 00:26:06.929 cpu : usr=93.14%, sys=4.08%, ctx=144, majf=0, minf=41 00:26:06.929 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename1: (groupid=0, jobs=1): err= 0: pid=1708761: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:26:06.929 slat (usec): min=9, max=162, avg=42.23, stdev=20.34 00:26:06.929 clat (usec): min=19269, max=70302, avg=33142.11, stdev=2611.88 00:26:06.929 lat (usec): min=19279, max=70336, avg=33184.34, stdev=2613.20 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35390], 95.00th=[35914], 00:26:06.929 | 99.00th=[36963], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:26:06.929 | 99.99th=[70779] 00:26:06.929 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.929 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.929 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.929 cpu : usr=95.86%, sys=2.42%, ctx=108, majf=0, minf=51 00:26:06.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename1: (groupid=0, jobs=1): err= 0: pid=1708762: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10024msec) 00:26:06.929 slat (nsec): min=8470, max=73166, avg=22772.04, stdev=10283.21 00:26:06.929 clat (usec): min=12356, max=46326, avg=33243.33, stdev=1836.71 00:26:06.929 lat (usec): min=12402, max=46349, avg=33266.10, stdev=1835.37 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[29230], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:06.929 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:06.929 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.929 | 99.00th=[36963], 99.50th=[37487], 99.90th=[43254], 99.95th=[43779], 00:26:06.929 | 99.99th=[46400] 00:26:06.929 bw ( KiB/s): min= 1792, max= 2032, per=4.18%, avg=1913.75, stdev=48.42, samples=20 00:26:06.929 iops : min= 448, max= 508, avg=478.40, stdev=12.20, samples=20 00:26:06.929 lat (msec) : 20=0.33%, 50=99.67% 00:26:06.929 cpu : usr=95.99%, sys=2.40%, ctx=65, majf=0, minf=55 00:26:06.929 IO depths : 1=1.0%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename2: (groupid=0, jobs=1): err= 0: pid=1708763: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:26:06.929 slat (usec): min=8, max=115, avg=42.37, stdev=13.02 
00:26:06.929 clat (usec): min=19482, max=79147, avg=33205.34, stdev=2638.35 00:26:06.929 lat (usec): min=19518, max=79168, avg=33247.71, stdev=2636.00 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.929 | 99.00th=[36963], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:26:06.929 | 99.99th=[79168] 00:26:06.929 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.929 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.929 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.929 cpu : usr=98.09%, sys=1.49%, ctx=18, majf=0, minf=34 00:26:06.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename2: (groupid=0, jobs=1): err= 0: pid=1708764: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10019msec) 00:26:06.929 slat (usec): min=10, max=123, avg=47.90, stdev=17.82 00:26:06.929 clat (usec): min=19613, max=55076, avg=33106.34, stdev=1702.71 00:26:06.929 lat (usec): min=19665, max=55112, avg=33154.24, stdev=1702.87 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 00:26:06.929 | 99.00th=[36963], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 
00:26:06.929 | 99.99th=[55313] 00:26:06.929 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1907.20, stdev=91.93, samples=20 00:26:06.929 iops : min= 416, max= 512, avg=476.80, stdev=22.98, samples=20 00:26:06.929 lat (msec) : 20=0.33%, 50=99.62%, 100=0.04% 00:26:06.929 cpu : usr=97.99%, sys=1.62%, ctx=11, majf=0, minf=27 00:26:06.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename2: (groupid=0, jobs=1): err= 0: pid=1708765: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10025msec) 00:26:06.929 slat (nsec): min=8330, max=53872, avg=25765.46, stdev=8061.15 00:26:06.929 clat (usec): min=18669, max=43551, avg=33171.88, stdev=1527.56 00:26:06.929 lat (usec): min=18710, max=43581, avg=33197.65, stdev=1527.23 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:06.929 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.929 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[36439], 00:26:06.929 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:26:06.929 | 99.99th=[43779] 00:26:06.929 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.75, stdev=77.17, samples=20 00:26:06.929 iops : min= 448, max= 512, avg=478.40, stdev=19.35, samples=20 00:26:06.929 lat (msec) : 20=0.04%, 50=99.96% 00:26:06.929 cpu : usr=96.18%, sys=2.39%, ctx=94, majf=0, minf=45 00:26:06.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename2: (groupid=0, jobs=1): err= 0: pid=1708766: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:26:06.929 slat (usec): min=10, max=107, avg=44.67, stdev=14.68 00:26:06.929 clat (usec): min=19376, max=70049, avg=33162.88, stdev=2599.46 00:26:06.929 lat (usec): min=19410, max=70082, avg=33207.55, stdev=2599.68 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35390], 95.00th=[35914], 00:26:06.929 | 99.00th=[36963], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:26:06.929 | 99.99th=[69731] 00:26:06.929 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.929 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.929 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.929 cpu : usr=98.06%, sys=1.55%, ctx=29, majf=0, minf=28 00:26:06.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:06.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.929 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.929 filename2: (groupid=0, jobs=1): err= 0: pid=1708767: Wed Jul 24 20:54:00 2024 00:26:06.929 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:26:06.929 slat (usec): min=12, max=107, avg=41.48, stdev=12.63 00:26:06.929 clat (usec): min=19316, max=69226, avg=33189.36, stdev=2567.42 
00:26:06.929 lat (usec): min=19354, max=69261, avg=33230.84, stdev=2566.09 00:26:06.929 clat percentiles (usec): 00:26:06.929 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:26:06.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:26:06.929 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.929 | 99.00th=[36963], 99.50th=[43254], 99.90th=[68682], 99.95th=[68682], 00:26:06.929 | 99.99th=[69731] 00:26:06.929 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1899.79, stdev=97.88, samples=19 00:26:06.929 iops : min= 384, max= 512, avg=474.95, stdev=24.47, samples=19 00:26:06.929 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:26:06.929 cpu : usr=98.36%, sys=1.25%, ctx=11, majf=0, minf=29 00:26:06.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.930 filename2: (groupid=0, jobs=1): err= 0: pid=1708768: Wed Jul 24 20:54:00 2024 00:26:06.930 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10025msec) 00:26:06.930 slat (nsec): min=9498, max=73373, avg=27124.33, stdev=8571.04 00:26:06.930 clat (usec): min=21528, max=43561, avg=33178.55, stdev=1510.14 00:26:06.930 lat (usec): min=21561, max=43580, avg=33205.68, stdev=1508.72 00:26:06.930 clat percentiles (usec): 00:26:06.930 | 1.00th=[30278], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:06.930 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:06.930 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[36439], 00:26:06.930 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:26:06.930 | 99.99th=[43779] 00:26:06.930 bw ( KiB/s): min= 1792, 
max= 2048, per=4.18%, avg=1913.60, stdev=77.42, samples=20 00:26:06.930 iops : min= 448, max= 512, avg=478.40, stdev=19.35, samples=20 00:26:06.930 lat (msec) : 50=100.00% 00:26:06.930 cpu : usr=98.21%, sys=1.40%, ctx=16, majf=0, minf=27 00:26:06.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.930 filename2: (groupid=0, jobs=1): err= 0: pid=1708769: Wed Jul 24 20:54:00 2024 00:26:06.930 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10028msec) 00:26:06.930 slat (nsec): min=7979, max=96378, avg=23747.26, stdev=16078.09 00:26:06.930 clat (usec): min=9702, max=51745, avg=33113.35, stdev=2191.21 00:26:06.930 lat (usec): min=9711, max=51787, avg=33137.09, stdev=2194.35 00:26:06.930 clat percentiles (usec): 00:26:06.930 | 1.00th=[24511], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:06.930 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:26:06.930 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35390], 95.00th=[35914], 00:26:06.930 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43779], 99.95th=[51643], 00:26:06.930 | 99.99th=[51643] 00:26:06.930 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1920.00, stdev=71.93, samples=20 00:26:06.930 iops : min= 448, max= 512, avg=480.00, stdev=17.98, samples=20 00:26:06.930 lat (msec) : 10=0.29%, 20=0.21%, 50=99.42%, 100=0.08% 00:26:06.930 cpu : usr=97.94%, sys=1.69%, ctx=19, majf=0, minf=73 00:26:06.930 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:06.930 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.930 filename2: (groupid=0, jobs=1): err= 0: pid=1708770: Wed Jul 24 20:54:00 2024 00:26:06.930 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10029msec) 00:26:06.930 slat (nsec): min=8196, max=79561, avg=17481.85, stdev=9070.33 00:26:06.930 clat (usec): min=23943, max=46537, avg=33371.99, stdev=1917.38 00:26:06.930 lat (usec): min=23959, max=46562, avg=33389.48, stdev=1919.68 00:26:06.930 clat percentiles (usec): 00:26:06.930 | 1.00th=[25822], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:26:06.930 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:06.930 | 70.00th=[33162], 80.00th=[33424], 90.00th=[35914], 95.00th=[36439], 00:26:06.930 | 99.00th=[41681], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:26:06.930 | 99.99th=[46400] 00:26:06.930 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1907.20, stdev=91.93, samples=20 00:26:06.930 iops : min= 416, max= 512, avg=476.80, stdev=22.98, samples=20 00:26:06.930 lat (msec) : 50=100.00% 00:26:06.930 cpu : usr=98.07%, sys=1.48%, ctx=8, majf=0, minf=44 00:26:06.930 IO depths : 1=5.6%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:06.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.930 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:06.930 00:26:06.930 Run status group 0 (all jobs): 00:26:06.930 READ: bw=44.7MiB/s (46.9MB/s), 1904KiB/s-1921KiB/s (1950kB/s-1967kB/s), io=448MiB (470MB), run=10006-10029msec 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:06.930 
20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 bdev_null0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.930 [2024-07-24 20:54:01.293389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:06.930 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 bdev_null1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 20:54:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.931 { 00:26:06.931 "params": { 00:26:06.931 "name": "Nvme$subsystem", 00:26:06.931 "trtype": "$TEST_TRANSPORT", 00:26:06.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.931 "adrfam": "ipv4", 00:26:06.931 "trsvcid": "$NVMF_PORT", 00:26:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.931 "hdgst": ${hdgst:-false}, 00:26:06.931 "ddgst": ${ddgst:-false} 00:26:06.931 }, 00:26:06.931 "method": "bdev_nvme_attach_controller" 00:26:06.931 } 00:26:06.931 EOF 00:26:06.931 )") 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:06.931 { 00:26:06.931 "params": { 00:26:06.931 "name": "Nvme$subsystem", 00:26:06.931 "trtype": "$TEST_TRANSPORT", 00:26:06.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.931 "adrfam": "ipv4", 00:26:06.931 "trsvcid": "$NVMF_PORT", 00:26:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.931 "hdgst": ${hdgst:-false}, 00:26:06.931 "ddgst": ${ddgst:-false} 00:26:06.931 }, 00:26:06.931 "method": "bdev_nvme_attach_controller" 00:26:06.931 } 00:26:06.931 EOF 00:26:06.931 )") 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:06.931 "params": { 00:26:06.931 "name": "Nvme0", 00:26:06.931 "trtype": "tcp", 00:26:06.931 "traddr": "10.0.0.2", 00:26:06.931 "adrfam": "ipv4", 00:26:06.931 "trsvcid": "4420", 00:26:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.931 "hdgst": false, 00:26:06.931 "ddgst": false 00:26:06.931 }, 00:26:06.931 "method": "bdev_nvme_attach_controller" 00:26:06.931 },{ 00:26:06.931 "params": { 00:26:06.931 "name": "Nvme1", 00:26:06.931 "trtype": "tcp", 00:26:06.931 "traddr": "10.0.0.2", 00:26:06.931 "adrfam": "ipv4", 00:26:06.931 "trsvcid": "4420", 00:26:06.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.931 "hdgst": false, 00:26:06.931 "ddgst": false 00:26:06.931 }, 00:26:06.931 "method": "bdev_nvme_attach_controller" 00:26:06.931 }' 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:06.931 20:54:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:06.931 20:54:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.931 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:06.931 ... 00:26:06.931 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:06.931 ... 00:26:06.931 fio-3.35 00:26:06.931 Starting 4 threads 00:26:06.931 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.186 00:26:12.186 filename0: (groupid=0, jobs=1): err= 0: pid=1710268: Wed Jul 24 20:54:07 2024 00:26:12.186 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5003msec) 00:26:12.186 slat (nsec): min=5307, max=75837, avg=13068.88, stdev=5257.44 00:26:12.186 clat (usec): min=760, max=7846, avg=4284.94, stdev=695.46 00:26:12.186 lat (usec): min=779, max=7859, avg=4298.01, stdev=695.47 00:26:12.186 clat percentiles (usec): 00:26:12.186 | 1.00th=[ 2573], 5.00th=[ 3261], 10.00th=[ 3589], 20.00th=[ 3916], 00:26:12.187 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:12.187 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5669], 00:26:12.187 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 7767], 00:26:12.187 | 99.99th=[ 7832] 00:26:12.187 bw ( KiB/s): min=14048, max=15520, per=25.07%, avg=14778.80, stdev=555.47, samples=10 00:26:12.187 iops : min= 1756, max= 1940, avg=1847.30, stdev=69.49, samples=10 00:26:12.187 lat (usec) : 1000=0.03% 00:26:12.187 lat (msec) : 2=0.35%, 4=23.92%, 10=75.70% 00:26:12.187 cpu : usr=92.24%, sys=7.16%, ctx=12, majf=0, minf=0 00:26:12.187 IO depths : 1=0.1%, 2=12.0%, 4=60.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 issued rwts: total=9243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:12.187 filename0: (groupid=0, jobs=1): err= 0: pid=1710269: Wed Jul 24 20:54:07 2024 00:26:12.187 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:26:12.187 slat (nsec): min=6623, max=71907, avg=13047.48, stdev=5193.27 00:26:12.187 clat (usec): min=825, max=7891, avg=4302.21, stdev=700.73 00:26:12.187 lat (usec): min=839, max=7904, avg=4315.26, stdev=700.59 00:26:12.187 clat percentiles (usec): 00:26:12.187 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3654], 20.00th=[ 3949], 00:26:12.187 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:12.187 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5735], 00:26:12.187 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7570], 99.95th=[ 7701], 00:26:12.187 | 99.99th=[ 7898] 00:26:12.187 bw ( KiB/s): min=14272, max=15216, per=24.96%, avg=14716.40, stdev=314.71, samples=10 00:26:12.187 iops : min= 1784, max= 1902, avg=1839.50, stdev=39.39, samples=10 00:26:12.187 lat (usec) : 1000=0.03% 00:26:12.187 lat (msec) : 2=0.61%, 4=21.79%, 10=77.57% 00:26:12.187 cpu : usr=92.42%, sys=6.90%, ctx=12, majf=0, minf=9 00:26:12.187 IO depths : 1=0.1%, 2=13.0%, 4=59.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 issued rwts: total=9201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:12.187 filename1: (groupid=0, jobs=1): err= 0: pid=1710270: Wed Jul 24 20:54:07 2024 00:26:12.187 read: IOPS=1907, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5004msec) 00:26:12.187 slat (nsec): min=6198, max=53758, 
avg=11636.01, stdev=4471.63 00:26:12.187 clat (usec): min=1042, max=8024, avg=4155.90, stdev=609.35 00:26:12.187 lat (usec): min=1055, max=8037, avg=4167.54, stdev=609.53 00:26:12.187 clat percentiles (usec): 00:26:12.187 | 1.00th=[ 2638], 5.00th=[ 3130], 10.00th=[ 3425], 20.00th=[ 3752], 00:26:12.187 | 30.00th=[ 3982], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:12.187 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5014], 00:26:12.187 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7439], 99.95th=[ 7570], 00:26:12.187 | 99.99th=[ 8029] 00:26:12.187 bw ( KiB/s): min=14400, max=16016, per=25.89%, avg=15259.20, stdev=498.03, samples=10 00:26:12.187 iops : min= 1800, max= 2002, avg=1907.40, stdev=62.25, samples=10 00:26:12.187 lat (msec) : 2=0.23%, 4=29.95%, 10=69.82% 00:26:12.187 cpu : usr=91.80%, sys=7.56%, ctx=9, majf=0, minf=0 00:26:12.187 IO depths : 1=0.3%, 2=8.9%, 4=63.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 issued rwts: total=9545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:12.187 filename1: (groupid=0, jobs=1): err= 0: pid=1710271: Wed Jul 24 20:54:07 2024 00:26:12.187 read: IOPS=1776, BW=13.9MiB/s (14.6MB/s)(69.4MiB/5001msec) 00:26:12.187 slat (nsec): min=6773, max=59132, avg=12563.26, stdev=4657.57 00:26:12.187 clat (usec): min=769, max=8266, avg=4459.11, stdev=766.03 00:26:12.187 lat (usec): min=783, max=8279, avg=4471.68, stdev=765.35 00:26:12.187 clat percentiles (usec): 00:26:12.187 | 1.00th=[ 2868], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 4080], 00:26:12.187 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:12.187 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5342], 95.00th=[ 6259], 00:26:12.187 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 7963], 
99.95th=[ 8094], 00:26:12.187 | 99.99th=[ 8291] 00:26:12.187 bw ( KiB/s): min=13504, max=14832, per=24.04%, avg=14170.67, stdev=407.76, samples=9 00:26:12.187 iops : min= 1688, max= 1854, avg=1771.33, stdev=50.97, samples=9 00:26:12.187 lat (usec) : 1000=0.09% 00:26:12.187 lat (msec) : 2=0.43%, 4=13.93%, 10=85.56% 00:26:12.187 cpu : usr=92.60%, sys=6.78%, ctx=14, majf=0, minf=0 00:26:12.187 IO depths : 1=0.1%, 2=11.8%, 4=61.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.187 issued rwts: total=8883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:12.187 00:26:12.187 Run status group 0 (all jobs): 00:26:12.187 READ: bw=57.6MiB/s (60.4MB/s), 13.9MiB/s-14.9MiB/s (14.6MB/s-15.6MB/s), io=288MiB (302MB), run=5001-5004msec 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:12.187 20:54:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.187 00:26:12.187 real 0m24.513s 00:26:12.187 user 4m30.573s 00:26:12.187 sys 0m8.350s 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 ************************************ 00:26:12.187 END TEST fio_dif_rand_params 00:26:12.187 ************************************ 00:26:12.187 20:54:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:12.187 20:54:07 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:12.187 20:54:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:12.187 20:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:12.187 ************************************ 00:26:12.187 START TEST fio_dif_digest 00:26:12.187 ************************************ 00:26:12.187 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.188 bdev_null0 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.188 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.188 [2024-07-24 20:54:07.750666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:12.446 { 00:26:12.446 "params": { 00:26:12.446 "name": "Nvme$subsystem", 00:26:12.446 "trtype": "$TEST_TRANSPORT", 00:26:12.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.446 "adrfam": "ipv4", 00:26:12.446 "trsvcid": "$NVMF_PORT", 00:26:12.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.446 "hdgst": ${hdgst:-false}, 00:26:12.446 "ddgst": ${ddgst:-false} 00:26:12.446 }, 00:26:12.446 "method": "bdev_nvme_attach_controller" 00:26:12.446 } 00:26:12.446 EOF 00:26:12.446 )") 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:12.446 "params": { 00:26:12.446 "name": "Nvme0", 00:26:12.446 "trtype": "tcp", 00:26:12.446 "traddr": "10.0.0.2", 00:26:12.446 "adrfam": "ipv4", 00:26:12.446 "trsvcid": "4420", 00:26:12.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:12.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:12.446 "hdgst": true, 00:26:12.446 "ddgst": true 00:26:12.446 }, 00:26:12.446 "method": "bdev_nvme_attach_controller" 00:26:12.446 }' 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:12.446 20:54:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.446 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:12.446 ... 
00:26:12.446 fio-3.35 00:26:12.446 Starting 3 threads 00:26:12.703 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.904 00:26:24.904 filename0: (groupid=0, jobs=1): err= 0: pid=1711383: Wed Jul 24 20:54:18 2024 00:26:24.904 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10044msec) 00:26:24.904 slat (nsec): min=7965, max=39898, avg=13472.86, stdev=2223.02 00:26:24.904 clat (usec): min=9711, max=53011, avg=15589.61, stdev=1726.60 00:26:24.904 lat (usec): min=9725, max=53024, avg=15603.08, stdev=1726.59 00:26:24.904 clat percentiles (usec): 00:26:24.904 | 1.00th=[11994], 5.00th=[13435], 10.00th=[14091], 20.00th=[14615], 00:26:24.904 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:26:24.904 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:26:24.904 | 99.00th=[18220], 99.50th=[19530], 99.90th=[50594], 99.95th=[53216], 00:26:24.904 | 99.99th=[53216] 00:26:24.904 bw ( KiB/s): min=23552, max=26112, per=32.83%, avg=24655.35, stdev=733.80, samples=20 00:26:24.904 iops : min= 184, max= 204, avg=192.60, stdev= 5.70, samples=20 00:26:24.904 lat (msec) : 10=0.05%, 20=99.48%, 50=0.36%, 100=0.10% 00:26:24.904 cpu : usr=91.62%, sys=7.94%, ctx=26, majf=0, minf=127 00:26:24.904 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:24.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:24.905 filename0: (groupid=0, jobs=1): err= 0: pid=1711384: Wed Jul 24 20:54:18 2024 00:26:24.905 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10048msec) 00:26:24.905 slat (nsec): min=7945, max=85337, avg=13517.42, stdev=2698.54 00:26:24.905 clat (usec): min=9249, max=52431, avg=14573.97, stdev=1562.76 00:26:24.905 lat (usec): min=9262, max=52444, avg=14587.49, 
stdev=1562.75 00:26:24.905 clat percentiles (usec): 00:26:24.905 | 1.00th=[11076], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:26:24.905 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:26:24.905 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:26:24.905 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20055], 99.95th=[48497], 00:26:24.905 | 99.99th=[52691] 00:26:24.905 bw ( KiB/s): min=25344, max=27392, per=35.11%, avg=26368.00, stdev=538.27, samples=20 00:26:24.905 iops : min= 198, max= 214, avg=206.00, stdev= 4.21, samples=20 00:26:24.905 lat (msec) : 10=0.29%, 20=99.52%, 50=0.15%, 100=0.05% 00:26:24.905 cpu : usr=91.36%, sys=8.18%, ctx=35, majf=0, minf=164 00:26:24.905 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:24.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:24.905 filename0: (groupid=0, jobs=1): err= 0: pid=1711385: Wed Jul 24 20:54:18 2024 00:26:24.905 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10045msec) 00:26:24.905 slat (nsec): min=8029, max=38497, avg=13786.40, stdev=2141.14 00:26:24.905 clat (usec): min=11551, max=58837, avg=15778.93, stdev=2863.24 00:26:24.905 lat (usec): min=11573, max=58852, avg=15792.72, stdev=2863.24 00:26:24.905 clat percentiles (usec): 00:26:24.905 | 1.00th=[12780], 5.00th=[13566], 10.00th=[14091], 20.00th=[14615], 00:26:24.905 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:26:24.905 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:26:24.905 | 99.00th=[18744], 99.50th=[19530], 99.90th=[58459], 99.95th=[58983], 00:26:24.905 | 99.99th=[58983] 00:26:24.905 bw ( KiB/s): min=22784, max=26368, per=32.43%, avg=24358.40, stdev=925.91, 
samples=20 00:26:24.905 iops : min= 178, max= 206, avg=190.30, stdev= 7.23, samples=20 00:26:24.905 lat (msec) : 20=99.53%, 50=0.10%, 100=0.37% 00:26:24.905 cpu : usr=92.46%, sys=7.07%, ctx=29, majf=0, minf=126 00:26:24.905 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:24.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.905 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:24.905 00:26:24.905 Run status group 0 (all jobs): 00:26:24.905 READ: bw=73.3MiB/s (76.9MB/s), 23.7MiB/s-25.7MiB/s (24.9MB/s-26.9MB/s), io=737MiB (773MB), run=10044-10048msec 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.905 00:26:24.905 real 0m11.161s 00:26:24.905 user 0m28.762s 00:26:24.905 sys 0m2.600s 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:24.905 20:54:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.905 ************************************ 00:26:24.905 END TEST fio_dif_digest 00:26:24.905 ************************************ 00:26:24.905 20:54:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:24.905 20:54:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.905 rmmod nvme_tcp 00:26:24.905 rmmod nvme_fabrics 00:26:24.905 rmmod nvme_keyring 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1704855 ']' 00:26:24.905 20:54:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1704855 00:26:24.905 20:54:18 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1704855 ']' 00:26:24.905 20:54:18 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1704855 00:26:24.905 20:54:18 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:26:24.905 20:54:18 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.905 20:54:18 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1704855 00:26:24.905 20:54:19 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.905 20:54:19 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.905 20:54:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1704855' 00:26:24.905 killing process with pid 1704855 00:26:24.905 20:54:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1704855 00:26:24.905 20:54:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1704855 00:26:24.905 20:54:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:24.905 20:54:19 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:24.905 Waiting for block devices as requested 00:26:24.905 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:25.163 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:25.163 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:25.163 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:25.420 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:25.420 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:25.420 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.420 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:25.679 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:25.679 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:25.679 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:25.679 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:25.937 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:25.937 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:25.937 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.937 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:26.195 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:26.195 20:54:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.195 20:54:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.195 20:54:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.195 
20:54:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.195 20:54:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.195 20:54:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:26.195 20:54:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.724 20:54:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.724 00:26:28.724 real 1m7.056s 00:26:28.724 user 6m27.030s 00:26:28.724 sys 0m20.374s 00:26:28.724 20:54:23 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:28.724 20:54:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:28.724 ************************************ 00:26:28.724 END TEST nvmf_dif 00:26:28.724 ************************************ 00:26:28.724 20:54:23 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:28.724 20:54:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:28.724 20:54:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.724 20:54:23 -- common/autotest_common.sh@10 -- # set +x 00:26:28.724 ************************************ 00:26:28.724 START TEST nvmf_abort_qd_sizes 00:26:28.724 ************************************ 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:28.724 * Looking for test storage... 
00:26:28.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:28.724 20:54:23 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.724 20:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:30.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:30.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:26:30.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:30.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:30.623 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:30.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:26:30.624 00:26:30.624 --- 10.0.0.2 ping statistics --- 00:26:30.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.624 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:26:30.624 00:26:30.624 --- 10.0.0.1 ping statistics --- 00:26:30.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.624 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:30.624 20:54:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:31.557 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:31.557 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:31.815 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:31.815 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:26:31.815 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:32.749 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.749 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1716434 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1716434 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1716434 ']' 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.750 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:32.750 [2024-07-24 20:54:28.305820] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:26:32.750 [2024-07-24 20:54:28.305907] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.008 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.008 [2024-07-24 20:54:28.370308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.008 [2024-07-24 20:54:28.479765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.008 [2024-07-24 20:54:28.479815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.008 [2024-07-24 20:54:28.479843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.008 [2024-07-24 20:54:28.479854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.008 [2024-07-24 20:54:28.479863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:33.008 [2024-07-24 20:54:28.479946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.008 [2024-07-24 20:54:28.480013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.008 [2024-07-24 20:54:28.480077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.008 [2024-07-24 20:54:28.480081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:33.265 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.266 20:54:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:33.266 ************************************ 00:26:33.266 START TEST spdk_target_abort 00:26:33.266 ************************************ 00:26:33.266 20:54:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:26:33.266 20:54:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:33.266 20:54:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:33.266 20:54:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.266 20:54:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.579 spdk_targetn1 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.579 [2024-07-24 20:54:31.492558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.579 [2024-07-24 20:54:31.524798] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.579 20:54:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:36.579 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.102 Initializing NVMe Controllers 00:26:39.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:39.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:39.102 Initialization complete. Launching workers. 
00:26:39.102 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12353, failed: 0 00:26:39.102 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1327, failed to submit 11026 00:26:39.102 success 802, unsuccess 525, failed 0 00:26:39.102 20:54:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:39.102 20:54:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.360 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.638 Initializing NVMe Controllers 00:26:42.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:42.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:42.638 Initialization complete. Launching workers. 
00:26:42.638 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8643, failed: 0 00:26:42.638 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7418 00:26:42.638 success 326, unsuccess 899, failed 0 00:26:42.638 20:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:42.638 20:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:42.638 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.913 Initializing NVMe Controllers 00:26:45.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:45.913 Initialization complete. Launching workers. 
00:26:45.913 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31519, failed: 0 00:26:45.913 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2834, failed to submit 28685 00:26:45.913 success 518, unsuccess 2316, failed 0 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.913 20:54:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1716434 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1716434 ']' 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1716434 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.844 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1716434 00:26:47.102 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.102 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.102 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1716434' 00:26:47.102 killing process with pid 1716434 00:26:47.102 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1716434 00:26:47.102 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1716434 00:26:47.360 00:26:47.360 real 0m14.055s 00:26:47.360 user 0m53.256s 00:26:47.360 sys 0m2.469s 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:47.360 ************************************ 00:26:47.360 END TEST spdk_target_abort 00:26:47.360 ************************************ 00:26:47.360 20:54:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:47.360 20:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:47.360 20:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.360 20:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:47.360 ************************************ 00:26:47.360 START TEST kernel_target_abort 00:26:47.360 ************************************ 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:47.360 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:47.360 20:54:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:47.361 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:47.361 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:47.361 20:54:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:48.300 Waiting for block devices as requested 00:26:48.556 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:48.556 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:48.556 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:48.814 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:48.814 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:48.814 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:49.071 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:49.072 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:49.072 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:49.072 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:49.330 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:49.330 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:49.330 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:49.330 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:49.588 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:49.588 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:49.588 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:49.846 No valid GPT data, bailing 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:49.846 00:26:49.846 Discovery Log Number of Records 2, Generation counter 2 00:26:49.846 =====Discovery Log Entry 0====== 00:26:49.846 trtype: tcp 00:26:49.846 adrfam: ipv4 00:26:49.846 subtype: current discovery subsystem 00:26:49.846 treq: not specified, sq flow control disable supported 00:26:49.846 portid: 1 00:26:49.846 trsvcid: 4420 00:26:49.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:49.846 traddr: 10.0.0.1 00:26:49.846 eflags: none 00:26:49.846 sectype: none 00:26:49.846 =====Discovery Log Entry 1====== 00:26:49.846 trtype: tcp 00:26:49.846 adrfam: ipv4 00:26:49.846 subtype: nvme subsystem 00:26:49.846 treq: not specified, sq flow control disable supported 00:26:49.846 portid: 1 00:26:49.846 trsvcid: 4420 00:26:49.846 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:49.846 traddr: 10.0.0.1 00:26:49.846 eflags: none 00:26:49.846 sectype: none 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.846 20:54:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:49.846 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.125 Initializing NVMe Controllers 00:26:53.125 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.125 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:53.125 Initialization complete. Launching workers. 
00:26:53.125 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40579, failed: 0 00:26:53.125 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40579, failed to submit 0 00:26:53.125 success 0, unsuccess 40579, failed 0 00:26:53.125 20:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.125 20:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.125 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.407 Initializing NVMe Controllers 00:26:56.407 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.407 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:56.407 Initialization complete. Launching workers. 
00:26:56.407 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71486, failed: 0 00:26:56.407 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18034, failed to submit 53452 00:26:56.407 success 0, unsuccess 18034, failed 0 00:26:56.407 20:54:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.407 20:54:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.407 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.737 Initializing NVMe Controllers 00:26:59.737 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:59.737 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:59.737 Initialization complete. Launching workers. 
00:26:59.737 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69442, failed: 0 00:26:59.737 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17358, failed to submit 52084 00:26:59.737 success 0, unsuccess 17358, failed 0 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:59.737 20:54:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:00.303 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:00.303 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:00.303 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:00.303 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:00.303 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:00.566 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:00.566 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:00.566 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:00.566 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:00.566 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:00.566 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:00.567 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:00.567 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:00.567 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:00.567 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:00.567 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:01.500 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:01.500 00:27:01.500 real 0m14.231s 00:27:01.500 user 0m5.815s 00:27:01.500 sys 0m3.235s 00:27:01.500 20:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.500 20:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.500 ************************************ 00:27:01.500 END TEST kernel_target_abort 00:27:01.500 ************************************ 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.500 rmmod nvme_tcp 00:27:01.500 rmmod nvme_fabrics 00:27:01.500 rmmod nvme_keyring 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1716434 ']' 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1716434 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1716434 ']' 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1716434 00:27:01.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1716434) - No such process 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1716434 is not found' 00:27:01.500 Process with pid 1716434 is not found 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:01.500 20:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:02.874 Waiting for block devices as requested 00:27:02.874 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:02.874 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:03.132 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:03.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:03.132 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:03.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:03.390 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:03.390 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:03.390 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:03.390 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:03.648 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:03.648 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:03.648 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:03.648 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:03.906 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:03.906 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:03.906 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:03.906 20:54:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.435 20:55:01 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.435 00:27:06.435 real 0m37.732s 00:27:06.435 user 1m1.227s 00:27:06.435 sys 0m9.100s 00:27:06.435 20:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.435 20:55:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:06.435 ************************************ 00:27:06.435 END TEST nvmf_abort_qd_sizes 00:27:06.435 ************************************ 00:27:06.435 20:55:01 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:06.435 20:55:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:06.435 20:55:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.435 20:55:01 -- common/autotest_common.sh@10 -- # set +x 00:27:06.435 ************************************ 00:27:06.435 START TEST keyring_file 00:27:06.435 ************************************ 00:27:06.435 20:55:01 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:06.435 * Looking for test storage... 00:27:06.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.435 20:55:01 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.435 20:55:01 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.435 20:55:01 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.435 20:55:01 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.435 20:55:01 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.435 20:55:01 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.435 20:55:01 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:06.435 20:55:01 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:06.435 20:55:01 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JdORznuXis 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JdORznuXis 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JdORznuXis 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JdORznuXis 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.230b7JDLUZ 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:06.435 20:55:01 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:06.435 20:55:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.230b7JDLUZ 00:27:06.435 20:55:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.230b7JDLUZ 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.230b7JDLUZ 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@30 -- # tgtpid=1722199 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:06.435 20:55:01 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1722199 00:27:06.435 20:55:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1722199 ']' 00:27:06.435 20:55:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.435 20:55:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.436 20:55:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.436 20:55:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.436 20:55:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:06.436 [2024-07-24 20:55:01.741688] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:27:06.436 [2024-07-24 20:55:01.741787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722199 ] 00:27:06.436 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.436 [2024-07-24 20:55:01.805527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.436 [2024-07-24 20:55:01.924744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:06.694 20:55:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:06.694 [2024-07-24 20:55:02.194470] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.694 null0 00:27:06.694 [2024-07-24 20:55:02.226510] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:06.694 [2024-07-24 20:55:02.227035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:06.694 [2024-07-24 20:55:02.234511] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.694 20:55:02 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:06.694 [2024-07-24 20:55:02.246550] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:06.694 request: 00:27:06.694 { 00:27:06.694 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.694 "secure_channel": false, 00:27:06.694 "listen_address": { 00:27:06.694 "trtype": "tcp", 00:27:06.694 "traddr": "127.0.0.1", 00:27:06.694 "trsvcid": "4420" 00:27:06.694 }, 00:27:06.694 "method": "nvmf_subsystem_add_listener", 00:27:06.694 "req_id": 1 00:27:06.694 } 00:27:06.694 Got JSON-RPC error response 00:27:06.694 response: 00:27:06.694 { 00:27:06.694 "code": -32602, 00:27:06.694 "message": "Invalid parameters" 00:27:06.694 } 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.694 20:55:02 keyring_file -- keyring/file.sh@46 -- # bperfpid=1722210 00:27:06.694 20:55:02 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:06.694 20:55:02 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1722210 /var/tmp/bperf.sock 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1722210 ']' 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.694 20:55:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:06.952 [2024-07-24 20:55:02.295048] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:27:06.952 [2024-07-24 20:55:02.295127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722210 ] 00:27:06.952 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.952 [2024-07-24 20:55:02.355326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.952 [2024-07-24 20:55:02.473109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.210 20:55:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.210 20:55:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:07.210 20:55:02 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:07.210 20:55:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:07.467 20:55:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.230b7JDLUZ 00:27:07.467 20:55:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.230b7JDLUZ 00:27:07.724 20:55:03 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:07.724 20:55:03 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:07.724 20:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.724 20:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.724 20:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:07.980 20:55:03 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.JdORznuXis == 
\/\t\m\p\/\t\m\p\.\J\d\O\R\z\n\u\X\i\s ]] 00:27:07.980 20:55:03 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:07.980 20:55:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:07.980 20:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.980 20:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.980 20:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:08.236 20:55:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.230b7JDLUZ == \/\t\m\p\/\t\m\p\.\2\3\0\b\7\J\D\L\U\Z ]] 00:27:08.236 20:55:03 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:08.236 20:55:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:08.236 20:55:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.236 20:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.237 20:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.237 20:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:08.494 20:55:03 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:08.494 20:55:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:08.494 20:55:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:08.494 20:55:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.494 20:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.494 20:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.494 20:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:08.751 20:55:04 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:27:08.752 20:55:04 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:08.752 20:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:09.009 [2024-07-24 20:55:04.319758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:09.009 nvme0n1 00:27:09.009 20:55:04 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:09.009 20:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:09.009 20:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.009 20:55:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.009 20:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:09.009 20:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.266 20:55:04 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:09.266 20:55:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:09.266 20:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:09.266 20:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.266 20:55:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.266 20:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.266 20:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:09.524 20:55:04 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:09.524 20:55:04 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:09.524 Running I/O for 1 seconds... 00:27:10.897 00:27:10.897 Latency(us) 00:27:10.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.897 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:10.897 nvme0n1 : 1.01 6808.76 26.60 0.00 0.00 18686.15 5995.33 28738.75 00:27:10.897 =================================================================================================================== 00:27:10.897 Total : 6808.76 26.60 0.00 0.00 18686.15 5995.33 28738.75 00:27:10.897 0 00:27:10.897 20:55:06 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:10.897 20:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:10.897 20:55:06 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:10.897 20:55:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:10.897 20:55:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:10.898 20:55:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:10.898 20:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:10.898 20:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:11.155 20:55:06 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:11.155 20:55:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:11.155 20:55:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:11.155 20:55:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.155 20:55:06 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.155 20:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.155 20:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:11.413 20:55:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:11.413 20:55:06 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.413 20:55:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:11.413 20:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:11.671 [2024-07-24 20:55:07.037794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:11.671 [2024-07-24 20:55:07.037998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f429a0 (107): Transport endpoint is not connected 00:27:11.671 [2024-07-24 20:55:07.038990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f429a0 (9): Bad file descriptor 00:27:11.671 [2024-07-24 20:55:07.039987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:11.671 [2024-07-24 20:55:07.040009] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:11.671 [2024-07-24 20:55:07.040024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:11.671 request: 00:27:11.671 { 00:27:11.671 "name": "nvme0", 00:27:11.671 "trtype": "tcp", 00:27:11.671 "traddr": "127.0.0.1", 00:27:11.671 "adrfam": "ipv4", 00:27:11.671 "trsvcid": "4420", 00:27:11.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:11.671 "prchk_reftag": false, 00:27:11.671 "prchk_guard": false, 00:27:11.671 "hdgst": false, 00:27:11.671 "ddgst": false, 00:27:11.671 "psk": "key1", 00:27:11.671 "method": "bdev_nvme_attach_controller", 00:27:11.671 "req_id": 1 00:27:11.671 } 00:27:11.671 Got JSON-RPC error response 00:27:11.671 response: 00:27:11.671 { 00:27:11.671 "code": -5, 00:27:11.671 "message": "Input/output error" 00:27:11.671 } 00:27:11.671 20:55:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:11.671 20:55:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:11.671 20:55:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:11.671 20:55:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:11.671 20:55:07 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:11.671 
20:55:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:11.671 20:55:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.671 20:55:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.671 20:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.671 20:55:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:11.929 20:55:07 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:11.929 20:55:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:11.929 20:55:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:11.929 20:55:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.929 20:55:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.929 20:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.929 20:55:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:12.187 20:55:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:12.187 20:55:07 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:12.187 20:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:12.444 20:55:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:12.444 20:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:12.702 20:55:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:12.702 20:55:08 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:12.702 20:55:08 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.959 20:55:08 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:12.959 20:55:08 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.JdORznuXis 00:27:12.959 20:55:08 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:12.959 20:55:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:12.959 20:55:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:12.959 20:55:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:12.959 20:55:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:12.960 20:55:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:12.960 20:55:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:12.960 20:55:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:12.960 20:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:12.960 [2024-07-24 20:55:08.517061] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JdORznuXis': 0100660 00:27:12.960 [2024-07-24 20:55:08.517100] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:12.960 request: 00:27:12.960 { 00:27:12.960 "name": "key0", 00:27:12.960 "path": "/tmp/tmp.JdORznuXis", 00:27:12.960 "method": "keyring_file_add_key", 00:27:12.960 "req_id": 1 00:27:12.960 } 00:27:12.960 Got JSON-RPC error response 00:27:12.960 response: 00:27:12.960 { 00:27:12.960 "code": -1, 00:27:12.960 "message": "Operation not permitted" 
00:27:12.960 } 00:27:13.217 20:55:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:13.217 20:55:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:13.217 20:55:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:13.217 20:55:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:13.218 20:55:08 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.JdORznuXis 00:27:13.218 20:55:08 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JdORznuXis 00:27:13.218 20:55:08 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.JdORznuXis 00:27:13.218 20:55:08 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:13.218 20:55:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:13.475 20:55:09 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:13.475 20:55:09 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.475 20:55:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.475 20:55:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.733 [2024-07-24 20:55:09.247082] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JdORznuXis': No such file or directory 00:27:13.733 [2024-07-24 20:55:09.247119] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:13.733 [2024-07-24 20:55:09.247149] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:13.733 [2024-07-24 20:55:09.247161] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:13.733 [2024-07-24 20:55:09.247174] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:13.733 request: 00:27:13.733 { 00:27:13.733 "name": "nvme0", 00:27:13.733 "trtype": "tcp", 00:27:13.733 "traddr": "127.0.0.1", 00:27:13.733 "adrfam": "ipv4", 00:27:13.733 "trsvcid": "4420", 00:27:13.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:13.733 
"prchk_reftag": false, 00:27:13.733 "prchk_guard": false, 00:27:13.733 "hdgst": false, 00:27:13.733 "ddgst": false, 00:27:13.733 "psk": "key0", 00:27:13.733 "method": "bdev_nvme_attach_controller", 00:27:13.733 "req_id": 1 00:27:13.733 } 00:27:13.733 Got JSON-RPC error response 00:27:13.733 response: 00:27:13.733 { 00:27:13.733 "code": -19, 00:27:13.733 "message": "No such device" 00:27:13.733 } 00:27:13.733 20:55:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:13.733 20:55:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:13.733 20:55:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:13.733 20:55:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:13.733 20:55:09 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:13.733 20:55:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:13.991 20:55:09 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.C0KjtJng0G 00:27:13.991 20:55:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:13.991 20:55:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:13.991 20:55:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.991 20:55:09 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:13.991 20:55:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:13.991 20:55:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:13.991 20:55:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:14.249 20:55:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.C0KjtJng0G 00:27:14.249 20:55:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.C0KjtJng0G 00:27:14.249 20:55:09 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.C0KjtJng0G 00:27:14.249 20:55:09 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C0KjtJng0G 00:27:14.249 20:55:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C0KjtJng0G 00:27:14.249 20:55:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:14.249 20:55:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:14.814 nvme0n1 00:27:14.814 20:55:10 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:27:14.814 20:55:10 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:14.814 20:55:10 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:14.814 20:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:15.071 20:55:10 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:15.071 20:55:10 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:15.071 20:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.071 20:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.071 20:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:15.328 20:55:10 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:15.328 20:55:10 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:15.328 20:55:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:15.328 20:55:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.328 20:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.328 20:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.328 20:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:15.585 20:55:11 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:15.585 20:55:11 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:15.585 20:55:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:15.842 20:55:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:27:15.842 20:55:11 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:15.842 20:55:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.126 20:55:11 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:16.126 20:55:11 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C0KjtJng0G 00:27:16.126 20:55:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C0KjtJng0G 00:27:16.384 20:55:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.230b7JDLUZ 00:27:16.384 20:55:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.230b7JDLUZ 00:27:16.641 20:55:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.641 20:55:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:16.899 nvme0n1 00:27:16.899 20:55:12 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:16.899 20:55:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:17.156 20:55:12 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:17.156 "subsystems": [ 00:27:17.156 { 00:27:17.156 "subsystem": "keyring", 00:27:17.156 "config": [ 00:27:17.156 { 00:27:17.156 "method": "keyring_file_add_key", 00:27:17.156 
"params": { 00:27:17.156 "name": "key0", 00:27:17.156 "path": "/tmp/tmp.C0KjtJng0G" 00:27:17.156 } 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "method": "keyring_file_add_key", 00:27:17.156 "params": { 00:27:17.156 "name": "key1", 00:27:17.156 "path": "/tmp/tmp.230b7JDLUZ" 00:27:17.156 } 00:27:17.156 } 00:27:17.156 ] 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "subsystem": "iobuf", 00:27:17.156 "config": [ 00:27:17.156 { 00:27:17.156 "method": "iobuf_set_options", 00:27:17.156 "params": { 00:27:17.156 "small_pool_count": 8192, 00:27:17.156 "large_pool_count": 1024, 00:27:17.156 "small_bufsize": 8192, 00:27:17.156 "large_bufsize": 135168 00:27:17.156 } 00:27:17.156 } 00:27:17.156 ] 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "subsystem": "sock", 00:27:17.156 "config": [ 00:27:17.156 { 00:27:17.156 "method": "sock_set_default_impl", 00:27:17.156 "params": { 00:27:17.156 "impl_name": "posix" 00:27:17.156 } 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "method": "sock_impl_set_options", 00:27:17.156 "params": { 00:27:17.156 "impl_name": "ssl", 00:27:17.156 "recv_buf_size": 4096, 00:27:17.156 "send_buf_size": 4096, 00:27:17.156 "enable_recv_pipe": true, 00:27:17.156 "enable_quickack": false, 00:27:17.156 "enable_placement_id": 0, 00:27:17.156 "enable_zerocopy_send_server": true, 00:27:17.156 "enable_zerocopy_send_client": false, 00:27:17.156 "zerocopy_threshold": 0, 00:27:17.156 "tls_version": 0, 00:27:17.156 "enable_ktls": false 00:27:17.156 } 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "method": "sock_impl_set_options", 00:27:17.156 "params": { 00:27:17.156 "impl_name": "posix", 00:27:17.156 "recv_buf_size": 2097152, 00:27:17.156 "send_buf_size": 2097152, 00:27:17.156 "enable_recv_pipe": true, 00:27:17.156 "enable_quickack": false, 00:27:17.156 "enable_placement_id": 0, 00:27:17.156 "enable_zerocopy_send_server": true, 00:27:17.156 "enable_zerocopy_send_client": false, 00:27:17.156 "zerocopy_threshold": 0, 00:27:17.156 "tls_version": 0, 00:27:17.156 "enable_ktls": false 
00:27:17.156 } 00:27:17.156 } 00:27:17.156 ] 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "subsystem": "vmd", 00:27:17.156 "config": [] 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "subsystem": "accel", 00:27:17.156 "config": [ 00:27:17.156 { 00:27:17.156 "method": "accel_set_options", 00:27:17.156 "params": { 00:27:17.156 "small_cache_size": 128, 00:27:17.156 "large_cache_size": 16, 00:27:17.156 "task_count": 2048, 00:27:17.156 "sequence_count": 2048, 00:27:17.156 "buf_count": 2048 00:27:17.156 } 00:27:17.156 } 00:27:17.156 ] 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "subsystem": "bdev", 00:27:17.156 "config": [ 00:27:17.156 { 00:27:17.156 "method": "bdev_set_options", 00:27:17.156 "params": { 00:27:17.156 "bdev_io_pool_size": 65535, 00:27:17.156 "bdev_io_cache_size": 256, 00:27:17.156 "bdev_auto_examine": true, 00:27:17.156 "iobuf_small_cache_size": 128, 00:27:17.156 "iobuf_large_cache_size": 16 00:27:17.156 } 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "method": "bdev_raid_set_options", 00:27:17.156 "params": { 00:27:17.156 "process_window_size_kb": 1024, 00:27:17.156 "process_max_bandwidth_mb_sec": 0 00:27:17.156 } 00:27:17.156 }, 00:27:17.156 { 00:27:17.156 "method": "bdev_iscsi_set_options", 00:27:17.156 "params": { 00:27:17.156 "timeout_sec": 30 00:27:17.157 } 00:27:17.157 }, 00:27:17.157 { 00:27:17.157 "method": "bdev_nvme_set_options", 00:27:17.157 "params": { 00:27:17.157 "action_on_timeout": "none", 00:27:17.157 "timeout_us": 0, 00:27:17.157 "timeout_admin_us": 0, 00:27:17.157 "keep_alive_timeout_ms": 10000, 00:27:17.157 "arbitration_burst": 0, 00:27:17.157 "low_priority_weight": 0, 00:27:17.157 "medium_priority_weight": 0, 00:27:17.157 "high_priority_weight": 0, 00:27:17.157 "nvme_adminq_poll_period_us": 10000, 00:27:17.157 "nvme_ioq_poll_period_us": 0, 00:27:17.157 "io_queue_requests": 512, 00:27:17.157 "delay_cmd_submit": true, 00:27:17.157 "transport_retry_count": 4, 00:27:17.157 "bdev_retry_count": 3, 00:27:17.157 "transport_ack_timeout": 0, 
00:27:17.157 "ctrlr_loss_timeout_sec": 0, 00:27:17.157 "reconnect_delay_sec": 0, 00:27:17.157 "fast_io_fail_timeout_sec": 0, 00:27:17.157 "disable_auto_failback": false, 00:27:17.157 "generate_uuids": false, 00:27:17.157 "transport_tos": 0, 00:27:17.157 "nvme_error_stat": false, 00:27:17.157 "rdma_srq_size": 0, 00:27:17.157 "io_path_stat": false, 00:27:17.157 "allow_accel_sequence": false, 00:27:17.157 "rdma_max_cq_size": 0, 00:27:17.157 "rdma_cm_event_timeout_ms": 0, 00:27:17.157 "dhchap_digests": [ 00:27:17.157 "sha256", 00:27:17.157 "sha384", 00:27:17.157 "sha512" 00:27:17.157 ], 00:27:17.157 "dhchap_dhgroups": [ 00:27:17.157 "null", 00:27:17.157 "ffdhe2048", 00:27:17.157 "ffdhe3072", 00:27:17.157 "ffdhe4096", 00:27:17.157 "ffdhe6144", 00:27:17.157 "ffdhe8192" 00:27:17.157 ] 00:27:17.157 } 00:27:17.157 }, 00:27:17.157 { 00:27:17.157 "method": "bdev_nvme_attach_controller", 00:27:17.157 "params": { 00:27:17.157 "name": "nvme0", 00:27:17.157 "trtype": "TCP", 00:27:17.157 "adrfam": "IPv4", 00:27:17.157 "traddr": "127.0.0.1", 00:27:17.157 "trsvcid": "4420", 00:27:17.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.157 "prchk_reftag": false, 00:27:17.157 "prchk_guard": false, 00:27:17.157 "ctrlr_loss_timeout_sec": 0, 00:27:17.157 "reconnect_delay_sec": 0, 00:27:17.157 "fast_io_fail_timeout_sec": 0, 00:27:17.157 "psk": "key0", 00:27:17.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:17.157 "hdgst": false, 00:27:17.157 "ddgst": false 00:27:17.157 } 00:27:17.157 }, 00:27:17.157 { 00:27:17.157 "method": "bdev_nvme_set_hotplug", 00:27:17.157 "params": { 00:27:17.157 "period_us": 100000, 00:27:17.157 "enable": false 00:27:17.157 } 00:27:17.157 }, 00:27:17.157 { 00:27:17.157 "method": "bdev_wait_for_examine" 00:27:17.157 } 00:27:17.157 ] 00:27:17.157 }, 00:27:17.157 { 00:27:17.157 "subsystem": "nbd", 00:27:17.157 "config": [] 00:27:17.157 } 00:27:17.157 ] 00:27:17.157 }' 00:27:17.157 20:55:12 keyring_file -- keyring/file.sh@114 -- # killprocess 1722210 00:27:17.157 
20:55:12 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1722210 ']' 00:27:17.157 20:55:12 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1722210 00:27:17.157 20:55:12 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:17.157 20:55:12 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:17.157 20:55:12 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1722210 00:27:17.415 20:55:12 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:17.415 20:55:12 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:17.415 20:55:12 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1722210' 00:27:17.415 killing process with pid 1722210 00:27:17.415 20:55:12 keyring_file -- common/autotest_common.sh@969 -- # kill 1722210 00:27:17.415 Received shutdown signal, test time was about 1.000000 seconds 00:27:17.415 00:27:17.415 Latency(us) 00:27:17.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.415 =================================================================================================================== 00:27:17.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.415 20:55:12 keyring_file -- common/autotest_common.sh@974 -- # wait 1722210 00:27:17.673 20:55:12 keyring_file -- keyring/file.sh@117 -- # bperfpid=1723673 00:27:17.673 20:55:12 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1723673 /var/tmp/bperf.sock 00:27:17.673 20:55:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1723673 ']' 00:27:17.673 20:55:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.673 20:55:12 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:17.673 20:55:12 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:17.673 20:55:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:17.673 20:55:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:17.673 20:55:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.673 20:55:12 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:17.673 "subsystems": [ 00:27:17.673 { 00:27:17.673 "subsystem": "keyring", 00:27:17.673 "config": [ 00:27:17.673 { 00:27:17.673 "method": "keyring_file_add_key", 00:27:17.673 "params": { 00:27:17.673 "name": "key0", 00:27:17.673 "path": "/tmp/tmp.C0KjtJng0G" 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "keyring_file_add_key", 00:27:17.673 "params": { 00:27:17.673 "name": "key1", 00:27:17.673 "path": "/tmp/tmp.230b7JDLUZ" 00:27:17.673 } 00:27:17.673 } 00:27:17.673 ] 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "subsystem": "iobuf", 00:27:17.673 "config": [ 00:27:17.673 { 00:27:17.673 "method": "iobuf_set_options", 00:27:17.673 "params": { 00:27:17.673 "small_pool_count": 8192, 00:27:17.673 "large_pool_count": 1024, 00:27:17.673 "small_bufsize": 8192, 00:27:17.673 "large_bufsize": 135168 00:27:17.673 } 00:27:17.673 } 00:27:17.673 ] 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "subsystem": "sock", 00:27:17.673 "config": [ 00:27:17.673 { 00:27:17.673 "method": "sock_set_default_impl", 00:27:17.673 "params": { 00:27:17.673 "impl_name": "posix" 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "sock_impl_set_options", 00:27:17.673 "params": { 00:27:17.673 "impl_name": "ssl", 00:27:17.673 "recv_buf_size": 4096, 00:27:17.673 "send_buf_size": 4096, 00:27:17.673 "enable_recv_pipe": true, 00:27:17.673 "enable_quickack": false, 00:27:17.673 "enable_placement_id": 0, 
00:27:17.673 "enable_zerocopy_send_server": true, 00:27:17.673 "enable_zerocopy_send_client": false, 00:27:17.673 "zerocopy_threshold": 0, 00:27:17.673 "tls_version": 0, 00:27:17.673 "enable_ktls": false 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "sock_impl_set_options", 00:27:17.673 "params": { 00:27:17.673 "impl_name": "posix", 00:27:17.673 "recv_buf_size": 2097152, 00:27:17.673 "send_buf_size": 2097152, 00:27:17.673 "enable_recv_pipe": true, 00:27:17.673 "enable_quickack": false, 00:27:17.673 "enable_placement_id": 0, 00:27:17.673 "enable_zerocopy_send_server": true, 00:27:17.673 "enable_zerocopy_send_client": false, 00:27:17.673 "zerocopy_threshold": 0, 00:27:17.673 "tls_version": 0, 00:27:17.673 "enable_ktls": false 00:27:17.673 } 00:27:17.673 } 00:27:17.673 ] 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "subsystem": "vmd", 00:27:17.673 "config": [] 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "subsystem": "accel", 00:27:17.673 "config": [ 00:27:17.673 { 00:27:17.673 "method": "accel_set_options", 00:27:17.673 "params": { 00:27:17.673 "small_cache_size": 128, 00:27:17.673 "large_cache_size": 16, 00:27:17.673 "task_count": 2048, 00:27:17.673 "sequence_count": 2048, 00:27:17.673 "buf_count": 2048 00:27:17.673 } 00:27:17.673 } 00:27:17.673 ] 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "subsystem": "bdev", 00:27:17.673 "config": [ 00:27:17.673 { 00:27:17.673 "method": "bdev_set_options", 00:27:17.673 "params": { 00:27:17.673 "bdev_io_pool_size": 65535, 00:27:17.673 "bdev_io_cache_size": 256, 00:27:17.673 "bdev_auto_examine": true, 00:27:17.673 "iobuf_small_cache_size": 128, 00:27:17.673 "iobuf_large_cache_size": 16 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "bdev_raid_set_options", 00:27:17.673 "params": { 00:27:17.673 "process_window_size_kb": 1024, 00:27:17.673 "process_max_bandwidth_mb_sec": 0 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "bdev_iscsi_set_options", 00:27:17.673 "params": { 
00:27:17.673 "timeout_sec": 30 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "bdev_nvme_set_options", 00:27:17.673 "params": { 00:27:17.673 "action_on_timeout": "none", 00:27:17.673 "timeout_us": 0, 00:27:17.673 "timeout_admin_us": 0, 00:27:17.673 "keep_alive_timeout_ms": 10000, 00:27:17.673 "arbitration_burst": 0, 00:27:17.673 "low_priority_weight": 0, 00:27:17.673 "medium_priority_weight": 0, 00:27:17.673 "high_priority_weight": 0, 00:27:17.673 "nvme_adminq_poll_period_us": 10000, 00:27:17.673 "nvme_ioq_poll_period_us": 0, 00:27:17.673 "io_queue_requests": 512, 00:27:17.673 "delay_cmd_submit": true, 00:27:17.673 "transport_retry_count": 4, 00:27:17.673 "bdev_retry_count": 3, 00:27:17.673 "transport_ack_timeout": 0, 00:27:17.673 "ctrlr_loss_timeout_sec": 0, 00:27:17.673 "reconnect_delay_sec": 0, 00:27:17.673 "fast_io_fail_timeout_sec": 0, 00:27:17.673 "disable_auto_failback": false, 00:27:17.673 "generate_uuids": false, 00:27:17.673 "transport_tos": 0, 00:27:17.673 "nvme_error_stat": false, 00:27:17.673 "rdma_srq_size": 0, 00:27:17.673 "io_path_stat": false, 00:27:17.673 "allow_accel_sequence": false, 00:27:17.673 "rdma_max_cq_size": 0, 00:27:17.673 "rdma_cm_event_timeout_ms": 0, 00:27:17.673 "dhchap_digests": [ 00:27:17.673 "sha256", 00:27:17.673 "sha384", 00:27:17.673 "sha512" 00:27:17.673 ], 00:27:17.673 "dhchap_dhgroups": [ 00:27:17.673 "null", 00:27:17.673 "ffdhe2048", 00:27:17.673 "ffdhe3072", 00:27:17.673 "ffdhe4096", 00:27:17.673 "ffdhe6144", 00:27:17.673 "ffdhe8192" 00:27:17.673 ] 00:27:17.673 } 00:27:17.673 }, 00:27:17.673 { 00:27:17.673 "method": "bdev_nvme_attach_controller", 00:27:17.674 "params": { 00:27:17.674 "name": "nvme0", 00:27:17.674 "trtype": "TCP", 00:27:17.674 "adrfam": "IPv4", 00:27:17.674 "traddr": "127.0.0.1", 00:27:17.674 "trsvcid": "4420", 00:27:17.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.674 "prchk_reftag": false, 00:27:17.674 "prchk_guard": false, 00:27:17.674 "ctrlr_loss_timeout_sec": 0, 
00:27:17.674 "reconnect_delay_sec": 0, 00:27:17.674 "fast_io_fail_timeout_sec": 0, 00:27:17.674 "psk": "key0", 00:27:17.674 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:17.674 "hdgst": false, 00:27:17.674 "ddgst": false 00:27:17.674 } 00:27:17.674 }, 00:27:17.674 { 00:27:17.674 "method": "bdev_nvme_set_hotplug", 00:27:17.674 "params": { 00:27:17.674 "period_us": 100000, 00:27:17.674 "enable": false 00:27:17.674 } 00:27:17.674 }, 00:27:17.674 { 00:27:17.674 "method": "bdev_wait_for_examine" 00:27:17.674 } 00:27:17.674 ] 00:27:17.674 }, 00:27:17.674 { 00:27:17.674 "subsystem": "nbd", 00:27:17.674 "config": [] 00:27:17.674 } 00:27:17.674 ] 00:27:17.674 }' 00:27:17.674 [2024-07-24 20:55:13.039951] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 00:27:17.674 [2024-07-24 20:55:13.040032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723673 ] 00:27:17.674 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.674 [2024-07-24 20:55:13.098611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.674 [2024-07-24 20:55:13.206817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.931 [2024-07-24 20:55:13.400519] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:18.494 20:55:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:18.494 20:55:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:18.494 20:55:14 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:18.494 20:55:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.494 20:55:14 keyring_file -- keyring/file.sh@120 -- # jq length 
00:27:18.751 20:55:14 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:18.751 20:55:14 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:18.751 20:55:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:18.751 20:55:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:18.751 20:55:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.751 20:55:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.751 20:55:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:19.009 20:55:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:19.009 20:55:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:19.009 20:55:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:19.009 20:55:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:19.009 20:55:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.009 20:55:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.009 20:55:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:19.266 20:55:14 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:19.266 20:55:14 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:19.266 20:55:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:19.266 20:55:14 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:19.523 20:55:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:19.523 20:55:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:19.523 20:55:15 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.C0KjtJng0G /tmp/tmp.230b7JDLUZ 00:27:19.523 20:55:15 keyring_file -- keyring/file.sh@20 -- # killprocess 1723673 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1723673 ']' 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1723673 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1723673 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1723673' 00:27:19.523 killing process with pid 1723673 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@969 -- # kill 1723673 00:27:19.523 Received shutdown signal, test time was about 1.000000 seconds 00:27:19.523 00:27:19.523 Latency(us) 00:27:19.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.523 =================================================================================================================== 00:27:19.523 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:19.523 20:55:15 keyring_file -- common/autotest_common.sh@974 -- # wait 1723673 00:27:19.780 20:55:15 keyring_file -- keyring/file.sh@21 -- # killprocess 1722199 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1722199 ']' 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1722199 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.780 20:55:15 keyring_file -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1722199 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1722199' 00:27:19.780 killing process with pid 1722199 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@969 -- # kill 1722199 00:27:19.780 [2024-07-24 20:55:15.325417] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:19.780 20:55:15 keyring_file -- common/autotest_common.sh@974 -- # wait 1722199 00:27:20.344 00:27:20.344 real 0m14.263s 00:27:20.344 user 0m35.263s 00:27:20.344 sys 0m3.297s 00:27:20.344 20:55:15 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.344 20:55:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:20.344 ************************************ 00:27:20.344 END TEST keyring_file 00:27:20.344 ************************************ 00:27:20.344 20:55:15 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:27:20.344 20:55:15 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:20.344 20:55:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:20.344 20:55:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.344 20:55:15 -- common/autotest_common.sh@10 -- # set +x 00:27:20.344 ************************************ 00:27:20.344 START TEST keyring_linux 00:27:20.344 ************************************ 00:27:20.344 20:55:15 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:20.344 * Looking for test storage... 
00:27:20.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:20.344 20:55:15 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:20.344 20:55:15 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.344 20:55:15 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.601 20:55:15 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.601 20:55:15 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.601 20:55:15 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.601 20:55:15 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.601 20:55:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.601 20:55:15 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.601 20:55:15 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.601 20:55:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:20.601 20:55:15 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:20.601 20:55:15 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:20.601 /tmp/:spdk-test:key0 00:27:20.601 20:55:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:20.601 20:55:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:20.601 20:55:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:20.601 20:55:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:20.601 20:55:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:20.601 /tmp/:spdk-test:key1 00:27:20.601 20:55:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1724035 00:27:20.601 20:55:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:20.601 20:55:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1724035 00:27:20.601 20:55:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1724035 ']' 00:27:20.601 20:55:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.602 20:55:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.602 20:55:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.602 20:55:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.602 20:55:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:20.602 [2024-07-24 20:55:16.062203] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:27:20.602 [2024-07-24 20:55:16.062301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724035 ] 00:27:20.602 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.602 [2024-07-24 20:55:16.123688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.859 [2024-07-24 20:55:16.246113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:21.117 [2024-07-24 20:55:16.526988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.117 null0 00:27:21.117 [2024-07-24 20:55:16.559040] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:21.117 [2024-07-24 20:55:16.559610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:21.117 273507199 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:21.117 607249262 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1724167 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:21.117 20:55:16 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1724167 /var/tmp/bperf.sock 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1724167 ']' 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.117 20:55:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:21.117 [2024-07-24 20:55:16.623325] Starting SPDK v24.09-pre git sha1 2ce15115b / DPDK 24.03.0 initialization... 
00:27:21.117 [2024-07-24 20:55:16.623399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724167 ] 00:27:21.117 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.375 [2024-07-24 20:55:16.685355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.375 [2024-07-24 20:55:16.806384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.308 20:55:17 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.308 20:55:17 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:22.308 20:55:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:22.308 20:55:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:22.308 20:55:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:22.308 20:55:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:22.873 20:55:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:22.873 20:55:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:22.873 [2024-07-24 20:55:18.360880] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:22.873 
nvme0n1 00:27:23.130 20:55:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:23.130 20:55:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:23.130 20:55:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:23.130 20:55:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:23.130 20:55:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:23.130 20:55:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.388 20:55:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:23.388 20:55:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:23.388 20:55:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:23.388 20:55:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:23.388 20:55:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:23.388 20:55:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.388 20:55:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@25 -- # sn=273507199 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 273507199 == \2\7\3\5\0\7\1\9\9 ]] 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 273507199 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:23.646 20:55:18 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:23.646 Running I/O for 1 seconds... 00:27:24.579 00:27:24.579 Latency(us) 00:27:24.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.579 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:24.579 nvme0n1 : 1.01 6548.98 25.58 0.00 0.00 19414.35 11990.66 32816.55 00:27:24.579 =================================================================================================================== 00:27:24.579 Total : 6548.98 25.58 0.00 0.00 19414.35 11990.66 32816.55 00:27:24.579 0 00:27:24.579 20:55:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:24.579 20:55:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:24.837 20:55:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:24.837 20:55:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:24.837 20:55:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:24.837 20:55:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:24.837 20:55:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:24.837 20:55:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:25.095 20:55:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:25.095 20:55:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:25.095 20:55:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:25.095 20:55:20 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:25.095 20:55:20 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:25.095 20:55:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:25.353 [2024-07-24 20:55:20.837770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:25.353 [2024-07-24 20:55:20.838123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234c030 (107): Transport endpoint is not connected 00:27:25.353 [2024-07-24 20:55:20.839113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x234c030 (9): Bad file descriptor 00:27:25.353 [2024-07-24 20:55:20.840111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:25.353 [2024-07-24 20:55:20.840130] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:25.353 [2024-07-24 20:55:20.840166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:25.353 request: 00:27:25.353 { 00:27:25.353 "name": "nvme0", 00:27:25.353 "trtype": "tcp", 00:27:25.353 "traddr": "127.0.0.1", 00:27:25.353 "adrfam": "ipv4", 00:27:25.353 "trsvcid": "4420", 00:27:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.353 "prchk_reftag": false, 00:27:25.353 "prchk_guard": false, 00:27:25.353 "hdgst": false, 00:27:25.353 "ddgst": false, 00:27:25.353 "psk": ":spdk-test:key1", 00:27:25.353 "method": "bdev_nvme_attach_controller", 00:27:25.353 "req_id": 1 00:27:25.353 } 00:27:25.353 Got JSON-RPC error response 00:27:25.353 response: 00:27:25.353 { 00:27:25.353 "code": -5, 00:27:25.353 "message": "Input/output error" 00:27:25.353 } 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@33 -- # sn=273507199 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 273507199 00:27:25.353 1 links removed 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@33 -- # sn=607249262 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 607249262 00:27:25.353 1 links removed 00:27:25.353 20:55:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1724167 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1724167 ']' 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1724167 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724167 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724167' 00:27:25.353 killing process with pid 1724167 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@969 -- # kill 1724167 00:27:25.353 Received shutdown signal, test time was about 1.000000 seconds 00:27:25.353 00:27:25.353 Latency(us) 00:27:25.353 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.353 =================================================================================================================== 00:27:25.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:25.353 20:55:20 keyring_linux -- common/autotest_common.sh@974 -- # wait 1724167 00:27:25.611 20:55:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1724035 00:27:25.611 20:55:21 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1724035 ']' 00:27:25.611 20:55:21 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1724035 00:27:25.611 20:55:21 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:25.611 20:55:21 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.611 20:55:21 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724035 00:27:25.870 20:55:21 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.870 20:55:21 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.870 20:55:21 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724035' 00:27:25.870 killing process with pid 1724035 00:27:25.870 20:55:21 keyring_linux -- common/autotest_common.sh@969 -- # kill 1724035 00:27:25.870 20:55:21 keyring_linux -- common/autotest_common.sh@974 -- # wait 1724035 00:27:26.128 00:27:26.128 real 0m5.808s 00:27:26.128 user 0m11.118s 00:27:26.128 sys 0m1.652s 00:27:26.128 20:55:21 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:26.128 20:55:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:26.128 ************************************ 00:27:26.128 END TEST keyring_linux 00:27:26.128 ************************************ 00:27:26.128 20:55:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:27:26.128 20:55:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:26.128 20:55:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:26.128 20:55:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:26.128 20:55:21 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:27:26.128 20:55:21 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:27:26.128 20:55:21 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:27:26.128 20:55:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.128 20:55:21 -- common/autotest_common.sh@10 -- # set +x 00:27:26.128 20:55:21 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:27:26.386 20:55:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:26.386 20:55:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:26.386 20:55:21 -- common/autotest_common.sh@10 -- # set +x 00:27:28.286 INFO: APP EXITING 00:27:28.286 INFO: killing all VMs 00:27:28.286 INFO: killing vhost app 00:27:28.286 INFO: EXIT DONE 00:27:29.220 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:29.220 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:29.220 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:29.220 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:29.220 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:29.220 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:29.220 0000:00:04.2 (8086 0e22): Already 
using the ioatdma driver 00:27:29.220 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:29.220 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:29.220 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:29.220 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:29.220 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:29.220 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:29.220 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:29.220 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:29.220 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:29.220 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:30.593 Cleaning 00:27:30.593 Removing: /var/run/dpdk/spdk0/config 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:30.593 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:30.593 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:30.593 Removing: /var/run/dpdk/spdk1/config 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:30.593 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:30.593 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:30.593 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:30.593 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:30.593 Removing: /var/run/dpdk/spdk2/config 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:30.593 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:30.593 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:30.593 Removing: /var/run/dpdk/spdk3/config 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:30.593 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:30.593 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:30.593 Removing: /var/run/dpdk/spdk4/config 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:30.593 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:30.593 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:30.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:30.594 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:30.594 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:30.594 Removing: /dev/shm/bdev_svc_trace.1 00:27:30.594 Removing: /dev/shm/nvmf_trace.0 00:27:30.594 Removing: /dev/shm/spdk_tgt_trace.pid1467227 00:27:30.594 Removing: /var/run/dpdk/spdk0 00:27:30.594 Removing: /var/run/dpdk/spdk1 00:27:30.594 Removing: /var/run/dpdk/spdk2 00:27:30.594 Removing: /var/run/dpdk/spdk3 00:27:30.594 Removing: /var/run/dpdk/spdk4 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1465560 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1466299 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1467227 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1467664 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1468356 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1468498 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1469214 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1469260 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1469480 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1470794 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1471708 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1471896 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1472090 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1472410 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1472598 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1472755 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1472915 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1473098 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1473407 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1475761 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1475925 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1476218 00:27:30.594 Removing: 
/var/run/dpdk/spdk_pid1476358 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1476664 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1476800 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477232 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477235 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477529 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477535 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477799 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1477843 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1478420 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1478599 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1478796 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1481281 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1483998 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1490975 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1491385 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1493902 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1494173 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1496693 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1500401 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1502599 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1509118 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1514429 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1515786 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1516965 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1527311 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1529595 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1555521 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1559337 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1563163 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1567137 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1567139 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1567796 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1568394 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1568993 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1569392 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1569471 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1569653 
00:27:30.594 Removing: /var/run/dpdk/spdk_pid1569788 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1569791 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1570447 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1570984 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1571647 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1572042 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1572047 00:27:30.594 Removing: /var/run/dpdk/spdk_pid1572308 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1573208 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1573925 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1579138 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1604570 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1607484 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1608667 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1610614 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1610752 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1610794 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1610904 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1611337 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1612657 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1613394 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1613820 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1615461 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1615890 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1616448 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1618964 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1624859 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1627637 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1631540 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1632619 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1633725 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1636420 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1638667 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1642873 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1642878 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1646389 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1646531 00:27:30.852 Removing: 
/var/run/dpdk/spdk_pid1646741 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1647055 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1647064 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1649822 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1650270 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1652815 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1654797 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1658210 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1661541 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1667891 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1672360 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1672362 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1685334 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1685864 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1686402 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1686816 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1687389 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1687805 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1688218 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1688739 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1691240 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1691504 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1695295 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1695481 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1697092 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1702124 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1702129 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1704903 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1706308 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1707749 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1708563 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1710047 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1711113 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1716817 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1717134 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1717525 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1719077 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1719357 
00:27:30.852 Removing: /var/run/dpdk/spdk_pid1719757 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1722199 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1722210 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1723673 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1724035 00:27:30.852 Removing: /var/run/dpdk/spdk_pid1724167 00:27:30.852 Clean 00:27:30.852 20:55:26 -- common/autotest_common.sh@1451 -- # return 0 00:27:30.852 20:55:26 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:27:30.853 20:55:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:30.853 20:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.111 20:55:26 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:27:31.111 20:55:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.111 20:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.111 20:55:26 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:31.111 20:55:26 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:31.111 20:55:26 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:31.111 20:55:26 -- spdk/autotest.sh@395 -- # hash lcov 00:27:31.111 20:55:26 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:31.111 20:55:26 -- spdk/autotest.sh@397 -- # hostname 00:27:31.111 20:55:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:31.111 geninfo: WARNING: invalid characters removed from testname! 
00:27:57.674 20:55:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:01.859 20:55:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:05.145 20:56:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:07.677 20:56:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:10.975 20:56:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:13.506 20:56:08 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:16.799 20:56:11 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:16.799 20:56:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.799 20:56:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:16.799 20:56:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.799 20:56:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.799 20:56:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.799 20:56:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.799 20:56:11 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.799 20:56:11 -- paths/export.sh@5 -- $ export PATH 00:28:16.799 20:56:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.799 20:56:11 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:16.799 20:56:11 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:16.799 20:56:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721847371.XXXXXX 00:28:16.799 20:56:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721847371.84ERkR 00:28:16.799 20:56:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:16.799 20:56:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:28:16.799 20:56:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:16.799 20:56:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:16.799 20:56:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:16.799 20:56:11 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:16.799 20:56:11 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:28:16.799 20:56:11 -- common/autotest_common.sh@10 -- $ set +x 00:28:16.799 20:56:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:16.799 20:56:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:16.799 20:56:11 -- pm/common@17 -- $ local monitor 00:28:16.799 20:56:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:16.799 20:56:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:16.799 20:56:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:16.799 20:56:11 -- pm/common@21 -- $ date +%s 00:28:16.799 20:56:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:16.799 20:56:11 -- pm/common@21 -- $ date +%s 00:28:16.799 20:56:11 -- pm/common@25 -- $ sleep 1 00:28:16.799 20:56:11 -- pm/common@21 -- $ date +%s 00:28:16.799 20:56:11 -- pm/common@21 -- $ date +%s 00:28:16.799 20:56:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721847371 00:28:16.799 20:56:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721847371 00:28:16.799 20:56:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721847371 00:28:16.799 20:56:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721847371 00:28:16.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721847371_collect-vmstat.pm.log 00:28:16.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721847371_collect-cpu-load.pm.log 00:28:16.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721847371_collect-cpu-temp.pm.log 00:28:16.799 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721847371_collect-bmc-pm.bmc.pm.log 00:28:17.367 20:56:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:17.367 20:56:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:17.367 20:56:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:17.367 20:56:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:17.367 20:56:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:17.367 20:56:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:17.367 20:56:12 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:17.367 20:56:12 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:17.367 20:56:12 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:17.367 20:56:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:17.367 20:56:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:17.367 20:56:12 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:28:17.367 20:56:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:17.367 20:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:17.367 20:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:17.367 20:56:12 -- pm/common@44 -- $ pid=1733835 00:28:17.367 20:56:12 -- pm/common@50 -- $ kill -TERM 1733835 00:28:17.367 20:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:17.367 20:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:17.367 20:56:12 -- pm/common@44 -- $ pid=1733837 00:28:17.367 20:56:12 -- pm/common@50 -- $ kill -TERM 1733837 00:28:17.367 20:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:17.367 20:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:17.367 20:56:12 -- pm/common@44 -- $ pid=1733839 00:28:17.367 20:56:12 -- pm/common@50 -- $ kill -TERM 1733839 00:28:17.367 20:56:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:17.367 20:56:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:17.367 20:56:12 -- pm/common@44 -- $ pid=1733871 00:28:17.367 20:56:12 -- pm/common@50 -- $ sudo -E kill -TERM 1733871 00:28:17.624 + [[ -n 1381877 ]] 00:28:17.624 + sudo kill 1381877 00:28:17.632 [Pipeline] } 00:28:17.642 [Pipeline] // stage 00:28:17.646 [Pipeline] } 00:28:17.655 [Pipeline] // timeout 00:28:17.658 [Pipeline] } 00:28:17.668 [Pipeline] // catchError 00:28:17.672 [Pipeline] } 00:28:17.683 [Pipeline] // wrap 00:28:17.686 [Pipeline] } 00:28:17.696 [Pipeline] // catchError 00:28:17.704 [Pipeline] stage 00:28:17.706 [Pipeline] { (Epilogue) 00:28:17.718 [Pipeline] catchError 00:28:17.720 [Pipeline] { 00:28:17.732 [Pipeline] echo 00:28:17.733 Cleanup 
processes 00:28:17.738 [Pipeline] sh 00:28:18.059 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.059 1733992 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:18.059 1734099 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.072 [Pipeline] sh 00:28:18.352 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.352 ++ grep -v 'sudo pgrep' 00:28:18.352 ++ awk '{print $1}' 00:28:18.352 + sudo kill -9 1733992 00:28:18.362 [Pipeline] sh 00:28:18.640 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:28.613 [Pipeline] sh 00:28:28.894 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:28.894 Artifacts sizes are good 00:28:28.908 [Pipeline] archiveArtifacts 00:28:28.916 Archiving artifacts 00:28:29.095 [Pipeline] sh 00:28:29.378 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:29.394 [Pipeline] cleanWs 00:28:29.404 [WS-CLEANUP] Deleting project workspace... 00:28:29.404 [WS-CLEANUP] Deferred wipeout is used... 00:28:29.411 [WS-CLEANUP] done 00:28:29.413 [Pipeline] } 00:28:29.434 [Pipeline] // catchError 00:28:29.449 [Pipeline] sh 00:28:29.728 + logger -p user.info -t JENKINS-CI 00:28:29.738 [Pipeline] } 00:28:29.757 [Pipeline] // stage 00:28:29.763 [Pipeline] } 00:28:29.780 [Pipeline] // node 00:28:29.785 [Pipeline] End of Pipeline 00:28:29.824 Finished: SUCCESS